Integrating Genomics and Phenomics in Personalised Health

First published 2024

The transition from genomics to phenomics in personalised population health represents a significant shift in approach: it expands beyond genetic information to encompass a comprehensive view of an individual’s health. This includes analysing biological levels such as the genome, epigenome, proteome, and metabolome, as well as lifestyle factors, physiology, and data from electronic health records. Such an integrative approach enables a more thorough understanding of health and disease, supports better tracking and interpretation of health metrics, and ultimately facilitates more effective, tailored healthcare interventions.

Profiling the many dimensions of health in the context of personalised population health involves a comprehensive assessment of various biological and environmental factors. The genome, serving as the blueprint of life, is assayed through technologies like single-nucleotide polymorphism chips, whole-exome sequencing, and whole-genome sequencing. These methods identify the genetic predispositions and susceptibilities of individuals, offering insights into their health.

The epigenome, which includes chemical modifications of the DNA, plays a crucial role in gene expression regulation. Techniques like bisulfite sequencing and chromatin immunoprecipitation followed by sequencing have enabled the study of these modifications, revealing their influence on processes such as ageing and diseases such as cancer. The epigenome’s responsiveness to external factors like diet and stress highlights its significance in personalised health.

Proteomics, the study of the proteome, involves the analysis of the myriad of proteins present in the body. Advances in mass spectrometry and high-throughput technologies have empowered researchers to explore the complex protein landscape, which is critical for understanding various diseases and physiological processes.

The metabolome, encompassing the complete set of metabolites, reflects the biochemical activity within the body. Metabolomics, through techniques like mass spectrometry, provides insights into the metabolic status and can be crucial in disease diagnosis and monitoring.

The microbiome, consisting of the microorganisms living in and on the human body, is another critical aspect of health profiling. The study of the microbiome, particularly through sequencing technologies, has unveiled its significant role in health and disease, influencing various bodily systems like the immune and digestive systems.

Lifestyle factors and physiology, including diet, exercise, and daily routines, are integral to health profiling. Wearable technologies and digital health tools have revolutionised the way these factors are monitored, providing real-time data on various physiological parameters like heart rate, sleep patterns, and blood glucose levels.

Lastly, electronic health records (EHRs) offer a wealth of clinical data, capturing patient interactions with healthcare systems. The integration of EHRs with other health data provides a comprehensive view of an individual’s health status, aiding in the personalised management of health.

Overall, the multidimensional approach to health profiling, encompassing genomics, epigenomics, proteomics, metabolomics, microbiomics, lifestyle factors, physiology, and EHRs, is pivotal in advancing personalised population health. This integrated perspective enables a more accurate assessment and management of health, moving towards a proactive and personalised healthcare paradigm.

Integrating different data types to track health, understand phenomic signatures of genomic variation, and translate this knowledge into clinical utility is a complex but promising area of personalised population health. The integration of multimodal data, such as genomic and phenomic data, provides a comprehensive understanding of health and disease. This approach involves defining metrics that can accurately track health and reflect the complex interplay between various biological systems.

One key aspect of this integration is understanding the phenomic signatures of genomic variation. Genomic data, such as genetic predispositions and mutations, can be linked to phenomic expressions like protein levels, metabolic profiles, and physiological responses. This connection allows for a deeper understanding of how genetic variations manifest in physical traits and health outcomes. Translating this integrated knowledge into clinical utility involves developing actionable recommendations based on a patient’s unique genomic and phenomic profile. This can lead to more personalised treatment plans, which may include lifestyle changes, diet, medication, or other interventions specifically tailored to an individual’s health profile. For example, the identification of specific biomarkers through deep phenotyping can indicate the onset of certain diseases, like cancer, before clinical symptoms appear.
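
As an illustration of what linking a genomic variant to a phenomic readout can look like in practice, the following sketch simulates genotype dosages and a hypothetical metabolite level and tests their association with a simple linear regression. The data, the assumed effect size, and the variable names are invented for illustration only; they do not come from the cited study.

```python
# Minimal sketch: testing whether a genomic variant is associated with a
# phenomic readout (here, a hypothetical metabolite level). All data are
# simulated; the 0.3 effect per allele is an arbitrary assumption.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

n = 500
genotype = rng.integers(0, 3, size=n)                          # 0/1/2 copies of the minor allele
metabolite = 1.0 + 0.3 * genotype + rng.normal(0, 1, size=n)   # assumed phenomic response

result = linregress(genotype, metabolite)
print(f"effect size per allele: {result.slope:.3f}")
print(f"p-value: {result.pvalue:.2e}")
```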

Another critical element is the application of advanced computational tools and artificial intelligence to analyse and interpret the vast amounts of data generated. These technologies can identify patterns and associations that might not be evident through traditional analysis methods. By effectively integrating and analysing these data, healthcare providers can gain a more detailed and accurate understanding of an individual’s health, leading to better disease prevention, diagnosis, and treatment strategies. The integration of diverse data types in personalised population health therefore represents a significant advancement in our ability to understand and manage health at an individual level.

Adopting personalised approaches to population health presents several challenges and potential solutions. One of the main challenges is the complexity of integrating diverse types of health data, such as genomic, proteomic, metabolomic, and lifestyle data. This integration requires advanced computational tools and algorithms capable of handling large, heterogeneous datasets and extracting meaningful insights from them. Another significant challenge lies in translating these insights into practical, actionable strategies in clinical settings. Personalised health strategies need to be tailored to individual genetic and phenomic profiles, taking into account not only the presence of certain biomarkers or genetic predispositions but also lifestyle factors and environmental exposures.

To address these challenges, solutions include the development of more sophisticated data integration and analysis tools, which can handle the complexity and volume of multimodal health data. Additionally, fostering closer collaboration between researchers, clinicians, and data scientists is crucial to ensure that insights from data analytics are effectively translated into clinical practice. Moreover, there is a need for standardisation in data collection, processing, and analysis to ensure consistency and reliability across different studies and applications. This standardisation also extends to the ethical aspects of handling personal health data, including privacy concerns and data security.

Implementing personalised health approaches also requires a shift in healthcare infrastructure and policies to support these advanced methods. This includes training healthcare professionals in the use of these technologies and ensuring that health systems are equipped to handle and use large amounts of data effectively. While the transition to personalised population health is challenging due to the complexity and novelty of the required approaches, these challenges can be overcome through technological advancements, collaboration across disciplines, standardisation of practices, and supportive healthcare policies.

The main findings and perspectives presented in this essay focus on the transformative potential of integrating genomics and phenomics in personalised population health. This integration enables a more nuanced understanding of individual health profiles, considering not only genetic predispositions but also the expression of these genes in various phenotypes. The comprehensive profiling of health through diverse data types – genomics, proteomics, metabolomics, and others – provides a detailed picture of an individual’s health trajectory. The study of phenomic signatures of genomic variation has emerged as a crucial aspect in understanding how genetic variations manifest in physical and health outcomes. The ability to define metrics that accurately track health, considering both genetic and phenomic data, is seen as a significant advancement. These metrics provide new insights into disease predisposition and progression, allowing for earlier and more precise interventions. However, the translation of these insights into clinical practice poses challenges, primarily due to the complexity and volume of data involved. The need for advanced computational tools and AI to analyse and interpret these data is evident. These tools not only manage the sheer volume of data but also help in discerning patterns and associations that might not be evident through traditional analysis methods.

Despite these challenges, the integration of various health data types is recognised as a pivotal step towards a more personalised approach to healthcare. This approach promises more effective disease prevention, diagnosis, and treatment strategies tailored to individual health profiles. It represents a shift from a one-size-fits-all approach in medicine to one that is predictive, preventative, and personalised.

Links

Yurkovich, J.T., Evans, S.J., Rappaport, N. et al. The transition from genomics to phenomics in personalized population health. Nat Rev Genet (2023). https://doi.org/10.1038/s41576-023-00674-x

https://createanessay4u.wordpress.com/tag/healthcare/

https://createanessay4u.wordpress.com/tag/ai/

https://createanessay4u.wordpress.com/tag/data/

https://www.sciencedirect.com/topics/agricultural-and-biological-sciences/phenomics

https://link.springer.com/journal/43657

https://www.who.int/docs/default-source/gho-documents/global-health-estimates/ghe2019_life-table-methods.pdf

https://www.nature.com/articles/520609a

Redefining Computing with Quantum Advantage

First published 2024

This CreateAnEssay4U special edition brings together the work of previous essays and provides a comprehensive overview of an important technological area of study. For source information, see also:

https://createanessay4u.wordpress.com/tag/quantum/

https://createanessay4u.wordpress.com/tag/computing/

In the constantly changing world of computational science, principles of quantum mechanics are shaping a new frontier, set to transform the foundation of problem-solving and data processing. This emerging frontier is characterised by a search for quantum advantage – a pivotal moment in computing, where quantum computers surpass classical ones in specific tasks. Far from being just a theoretical goal, this concept is a motivating force for the work of physicists, computer scientists, and engineers, aiming to unveil capabilities previously unattainable.

Central to this paradigm shift is the quantum bit or qubit. Unlike classical bits restricted to 0 or 1, qubits operate in a realm of quantum superposition, embodying both states simultaneously. This capability drastically expands computational potential. For example, Google’s quantum computer, Sycamore, used qubits to perform calculations that would be impractical for classical computers, illustrating the profound implications of quantum superposition in computational tasks.
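
A minimal way to see superposition concretely is to simulate a single qubit’s state vector classically. The sketch below applies a Hadamard gate to |0⟩ and recovers equal measurement probabilities for 0 and 1; it is ordinary NumPy arithmetic, not a model of Sycamore or any other quantum processor.

```python
# Classical simulation of a qubit placed into superposition by a Hadamard gate.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)      # |0>
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                            # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2          # Born rule: measurement probabilities

print(state)          # [0.707+0j, 0.707+0j]
print(probabilities)  # [0.5, 0.5] -- equal chance of measuring 0 or 1
```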

The power of quantum computing stems from the complex interaction of superposition, interference, and entanglement. Interference, similar to the merging of physical waves, manipulates qubits to emphasise correct solutions and suppress incorrect ones. This process is central to quantum algorithms, which, though challenging to develop, harness interference patterns to solve complex problems. An example of this is IBM’s quantum hardware, which has used interference-based algorithms to run small molecular simulations, a class of problems expected to exceed the reach of classical computers as molecular size grows.

Entanglement in quantum computing creates a unique correlation between qubits, where the state of one qubit is intrinsically tied to another, irrespective of distance. This “spooky action at a distance” allows for a collective computational behaviour surpassing classical computing. Quantum entanglement was notably demonstrated in the University of Maryland’s quantum computer, which used entangled qubits to execute algorithms faster than classical computers could.
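
The correlation that entanglement produces can likewise be illustrated with a small state-vector simulation: building a Bell state from a Hadamard and a CNOT gate and then sampling joint measurements yields only the outcomes 00 and 11. Again, this is a classical illustration rather than real hardware.

```python
# Classical simulation of entanglement: a Bell state built from H and CNOT.
# Sampling joint measurements shows perfectly correlated outcomes (00 or 11 only).
import numpy as np

rng = np.random.default_rng(1)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1.0                                   # |00>
bell = CNOT @ np.kron(H, I) @ ket00              # (|00> + |11>) / sqrt(2)

probs = np.abs(bell) ** 2
samples = rng.choice(["00", "01", "10", "11"], size=10, p=probs)
print(samples)   # only "00" and "11" ever appear: the qubits are correlated
```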

Quantum computing’s applications are vast. In cryptography, quantum computers can potentially break current encryption algorithms. For instance, quantum algorithms such as Shor’s factoring algorithm have been shown, in principle, to be capable of breaking encryption methods that remain secure against classical computational attacks. This has spurred the development of quantum-resistant algorithms in post-quantum cryptography.

Quantum simulation, a key application of quantum computing, was envisioned by physicist Richard Feynman and is now close to reality. Quantum computers, like those developed at Harvard University, use quantum simulation to model complex molecular structures, significantly impacting drug discovery and material science.

Quantum sensing, an application of quantum information technology, leverages quantum properties for precise measurements. A prototype quantum sensor developed by MIT researchers, capable of detecting various electromagnetic frequencies, exemplifies the advanced capabilities of quantum sensing in fields like medical imaging and environmental monitoring.

The concept of a quantum internet interconnecting quantum computers through secure protocols is another promising application. The University of Chicago’s recent experiments with quantum key distribution demonstrate how quantum cryptography can secure communications against even quantum computational attacks.

Despite these applications, quantum computing faces challenges, particularly in hardware and software development. Quantum computers are prone to decoherence, where qubits lose their quantum properties. Addressing this, researchers at Stanford University have developed techniques to prolong qubit coherence, a crucial step towards practical quantum computing.

The quantum computing landscape is rich with participation from startups and established players like Google and IBM, and bolstered by government investments. These collaborations accelerate advancements, as seen in the development of quantum error correction techniques at the University of California, Berkeley, enhancing the stability and reliability of quantum computations.

Early demonstrations of quantum advantage have been seen in specialised applications. Google’s achievement in using its quantum processor for the highly specialised task of random circuit sampling is an example. However, the threat of a “quantum winter,” a period of reduced interest and investment, looms if practical applications don’t quickly materialise.

In conclusion, quantum advantage represents a turning point in computing, propelled by quantum mechanics. Its journey is complex, with immense potential for reshaping various fields. As this field evolves, it promises to tackle complex problems, from cryptography to material science, marking a transformative phase in technological advancement.


Links

https://www.nature.com/articles/s41586-022-04940-6

https://www.quantumcomputinginc.com/blog/quantum-advantage/

https://www.ft.com/content/e70fa0ce-d792-4bc2-b535-e29969098dc5

https://semiengineering.com/the-race-toward-quantum-advantage/

https://www.cambridge.org/gb/universitypress/subjects/physics/quantum-physics-quantum-information-and-quantum-computation/

The Promise and Challenges of Silver Nanowire Networks

First published 2024

The fascination with artificial intelligence (AI) stems from its ability to handle massive data volumes with superhuman efficiency. Traditional AI systems depend on computers running complex algorithms through artificial neural networks, but these systems consume significant energy, particularly when processing real-time data. To address this, a novel approach to machine intelligence is being pursued, shifting from software-based artificial neural networks to more efficient physical neural networks in hardware, specifically using silver nanowires.

Silver nanowires, only a few nanometres in diameter, offer a more efficient alternative to conventional graphical processing units (GPUs) and neural chips. These nanowires form dense neuron networks, surpassing the efficiency of computer-based AI systems. Their small size allows for more densely packed networks, enhancing information processing speed and complexity. The nanowires’ flexibility and durability further add to their appeal, being adaptable to different configurations and more resistant to wear compared to traditional AI systems. Signals also propagate through these dense networks very rapidly, supporting processing speeds that compare favourably with GPUs and neural chips. This, coupled with the high conductivity of silver, allows these nanowires to operate at lower voltages, thereby reducing power consumption. Their small size is particularly beneficial for integration into compact devices like smartphones and wearables. Moreover, their ability to multitask means they can process more information in less time, enhancing their suitability for various AI applications.

While the advancements in using silver nanowires for AI are promising, they are accompanied by several challenges. Their high cost limits accessibility, particularly for smaller firms and startups, and the nanowires’ limited availability complicates their integration into a wide range of products. Additionally, the fragility of silver nanowires may compromise their durability, requiring careful handling to prevent damage and making them potentially less robust than traditional GPUs and neural chips. Furthermore, despite their rapid data processing capabilities, silver nanowires may not yet rival the performance of GPUs in high-performance computing or be as efficient in handling large-scale data processing.

In contrast, the field of neuromorphic computing, which aims to replicate the complex neuron topology of the brain using nanomaterials, is making significant strides. Networks composed of silver nanowires and nanoparticles are particularly noteworthy for their resistive switching properties, akin to memristors, which enhance network adaptability and plasticity. A prime example of this is Atomic Switch Networks (ASNs) made of Ag2S junctions. In these networks, dendritic Ag nanowires form interconnected atomic switches, effectively emulating the dense connectivity found in biological neurons. These networks have shown potential in various natural computing paradigms, including reservoir computing, highlighting the diverse applications of these innovative neural network architectures.
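
To give a feel for the resistive switching these networks rely on, the toy model below tracks the conductance of a single junction that potentiates under strong voltage pulses and relaxes otherwise. The thresholds, rates, and bounds are arbitrary assumptions chosen for illustration, not measured Ag2S parameters.

```python
# Toy memristive junction: conductance switches towards the ON state under
# voltage pulses above a threshold and relaxes back otherwise, mimicking
# synapse-like plasticity. All constants are illustrative assumptions.
import numpy as np

def update_conductance(g, v, g_min=0.01, g_max=1.0,
                       v_threshold=0.5, learn_rate=0.1, decay=0.02):
    """Advance one junction's conductance state g given applied voltage v."""
    if abs(v) > v_threshold:
        g += learn_rate * (g_max - g)   # potentiate towards the ON state
    else:
        g -= decay * (g - g_min)        # relax back towards the OFF state
    return float(np.clip(g, g_min, g_max))

g = 0.01
trace = []
for step in range(50):
    v = 0.8 if step < 25 else 0.1       # strong pulses first, then weak ones
    g = update_conductance(g, v)
    trace.append(g)

print(f"conductance after stimulation: {trace[24]:.3f}")
print(f"conductance after relaxation:  {trace[-1]:.3f}")
```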

Further explorations in creating neuromorphic networks have involved self-assembled networks of nanowires or nanoparticles, such as those formed from nanoparticles of metals or metal oxides, for example gold or tin oxide. These networks display neuromorphic properties due to their resistive switches and show recurrent properties crucial for neuromorphic applications. Such advancements in the field of AI, particularly with the use of silver nanowires, point to a future where computing not only becomes more efficient but also more closely emulates the complex processes of the human brain. These developments indicate the potential for revolutionary changes in how data is processed and learned, paving the way for more advanced and energy-efficient AI systems.

A recent approach in neuromorphic computing demonstrates the capability of silver nanowire neural networks to learn and recognise handwritten numbers and to memorise digit strings, with findings published in Nature Communications (2023) by a team including researchers from the University of Sydney and the University of California, Los Angeles. The team employs nanotechnology to create networks of silver nanowires, each about one thousandth the width of a human hair. These networks form randomly, resembling the brain’s neuron network. In these networks, external electrical signals prompt changes at the intersections of nanowires, mimicking the function of biological synapses. With tens of thousands of these synapse-like junctions, these networks efficiently process and transmit information.

A significant aspect of this research is the demonstration of real-time, online machine learning capabilities of nanowire networks, in contrast to conventional batch-based learning in AI. Unlike traditional systems that process data in batches, this approach allows continuous data stream processing, enabling the system to learn and adapt instantly. This “on the fly” learning reduces the need for repetitive data processing and extensive memory requirements, resulting in substantial energy savings and increased efficiency.
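
The contrast with batch training can be sketched with a conventional software learner updated one sample at a time. The streaming logistic-regression toy below stands in for the idea of continuous, memory-light adaptation; it is only an analogy to the nanowire hardware, and the data stream and decision rule are invented.

```python
# Sketch of online ("on the fly") learning: a logistic-regression learner
# updated immediately on each arriving sample, with no stored batch.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)                              # model weights, updated continuously
b = 0.0
lr = 0.1

for t in range(2000):                        # each iteration = one new sample arriving
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else 0.0      # assumed ground-truth rule
    p = sigmoid(w @ x + b)
    w -= lr * (p - y) * x                    # immediate gradient update
    b -= lr * (p - y)

# After the stream, the learner has adapted without ever revisiting old data.
x_test = np.array([0.5, 0.5])
print(f"P(class 1 | x_test) = {sigmoid(w @ x_test + b):.2f}")
```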

The team tested the nanowire network’s learning and memory capabilities using the Modified National Institute of Standards and Technology (MNIST) database of handwritten digits. The network successfully learned and improved its pattern recognition with each new digit, showcasing real-time learning. Additionally, the network was tested on memory tasks involving digit patterns, demonstrating an aptitude for remembering sequences, akin to recalling a phone number.

These experiments highlight the potential of neuromorphic nanowire networks in emulating brain-like learning and memory processes. This research represents just the beginning of unlocking the full capabilities of such networks, indicating a promising future for AI development. The implications of these findings are far-reaching. Nanowire Network (NWN) devices could be used in areas such as natural language processing and image analysis, making the most of their ability to learn and remember dynamic sequences. The study points to the possibility of NWNs contributing to new types of computational applications, moving beyond the traditional limits of the Turing Machine concept and based on real-world physical systems.

In conclusion, the exploration of silver nanowires in artificial intelligence marks a significant shift towards more efficient, brain-like computing. These nanowires, mere nanometres in diameter, present a highly efficient alternative to traditional GPUs and neural chips, forming densely packed neuron networks that excel in processing speed and complexity. Their adaptability, durability, and ability to operate at lower voltages highlight the potential for integration into various compact devices and AI applications.

However, challenges such as high cost, limited availability, and fragility temper the widespread adoption of silver nanowires, along with their current limitations in matching the performance of GPUs in certain high-demand computing tasks. Despite these hurdles, the advancements in neuromorphic computing using silver nanowires and other nanomaterials are promising. Networks like Atomic Switch Networks (ASNs) demonstrate the potential of these materials in replicating the complex connectivity and functionality of biological neurons, paving the way for breakthroughs in natural computing paradigms.

The 2023 study showcasing the online learning and memory capabilities of silver nanowire networks, especially in tasks like recognising handwritten numbers and memorising digit sequences, represents a leap forward in AI research. These networks, capable of processing data streams in real time, offer a more energy-efficient and dynamic approach to machine learning, differing fundamentally from traditional batch-based methods. This approach not only saves energy but also mimics the human brain’s ability to learn and recall quickly and efficiently.

As the field of AI continues to evolve, silver nanowires and neuromorphic networks stand at the forefront of research, potentially revolutionising how data is processed and learned. Their application in areas such as natural language processing and image analysis could harness their unique learning and memory abilities. This research, still in its early stages, opens the door to new computational applications that go beyond conventional paradigms, drawing inspiration from the physical world and the human brain. The future of AI development, influenced by these innovations, holds immense promise for more advanced, efficient, and brain-like artificial intelligence systems.

Links

https://nanografi.com/blog/silver-nanowires-applications-nanografi-blog/

https://www.techtarget.com/searchenterpriseai/definition/neuromorphic-computing

https://www.nature.com/articles/s41467-023-42470-5

https://www.nature.com/articles/s41598-019-51330-6

https://paperswithcode.com/dataset/mnist

The Evolutionary Journey of Artificial Intelligence

First published 2024

The goal of replicating human cognition through machines, known as artificial intelligence (AI), is a discipline spanning little more than six decades. Its roots can be traced back to the period following the Second World War, when AI emerged as a confluence of various scientific fields including mathematical logic, statistics, computational neurobiology, and computer science. Its aim? To mimic human cognitive abilities.

The inception of AI was profoundly influenced by the technological strides during the 1940-1960 period, a phase known as the birth of AI in the annals of cybernetics. This era was characterised by the fusion of technological advancements, catalysed further by the Second World War, and the aspiration to amalgamate the functions of machines and organic beings. Figures like Norbert Wiener envisaged a synthesis of mathematical theory, electronics, and automation to facilitate communication and control in both animals and machines. Furthermore, Warren McCulloch and Walter Pitts developed a pioneering mathematical and computer model of the biological neuron as early as 1943.

By the 1950s, notable contributors like John Von Neumann and Alan Turing were laying the technical groundwork for AI. They transitioned computing from the realm of decimal logic to binary logic, thus setting the stage for modern computing. Turing, in his seminal 1950 article, posited the famous “imitation game” or the Turing Test, a thought experiment designed to assess whether a machine could exhibit intelligent behaviour. Meanwhile, the term “AI” itself was coined in 1956 by John McCarthy, then at Dartmouth College, and further defined by Marvin Minsky as the creation of computer programs that replicate tasks typically accomplished more effectively by humans.

Although the late 1950s were marked by lofty prophecies, such as Herbert Simon’s prediction that AI would soon outperform humans in chess, the subsequent decades were not as generous to AI. Technology, despite its allure, faced a downturn in the early 1960s, primarily due to the limited memory of machines. Notwithstanding these limitations, some foundational elements persisted, like solution trees to solve problems.

Fast-forward to the 1980s and 1990s, AI experienced a resurgence with the emergence of expert systems. This resurgence was ignited by the advent of the first microprocessors in the late 1970s. Systems such as DENDRAL and MYCIN exemplified the potential of AI, offering highly specialised expertise. Despite this boom, by the late 1980s and early 1990s, the fervour surrounding AI diminished once more. Challenges in programming and system maintenance, combined with more straightforward and cost-effective alternatives, rendered AI less appealing.

However, post-2010 marked a transformative era for AI, fuelled predominantly by two factors: unprecedented access to vast data and substantial enhancement in computing power. For instance, prior to this decade, algorithms for tasks such as image classification required intensive manual data sampling. Now, with tools like Google, millions of samples could be accessed with ease. Furthermore, the discovery of the efficiency of graphic card processors in expediting the calculations of learning algorithms presented a game-changer.

These technological advancements spurred notable achievements in AI. For example, Watson, IBM’s AI system, triumphed over human contestants in the game Jeopardy in 2011. Google X’s AI managed to recognise cats in videos, a seemingly trivial task but one that heralded the machine’s learning capacity. These accomplishments symbolise a paradigm shift from the traditional expert systems to an inductive approach. Rather than manual rule coding, machines were now trained to autonomously discover patterns and correlations using vast datasets.

Deep learning, a subset of machine learning, has exhibited significant promise, particularly in tasks like voice and image recognition. Spearheaded by researchers like Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, deep learning has revolutionised fields such as speech recognition. Despite these breakthroughs, challenges persist. While devices can transcribe human speech, the nuanced understanding of human text and intention still eludes AI.

The rise of artificial intelligence in the modern era has led to the emergence of sophisticated systems, one of the most noteworthy being generative AI. Generative AI refers to the subset of artificial intelligence algorithms and models that use techniques from unsupervised machine learning to produce content. It seeks to create new content that resembles the data it has been trained on, for example, images, music, text, or even more complex data structures. It’s a profound leap from the traditional, deterministic AI systems to models that can generate and innovate, mirroring the creative processes that were once thought exclusive to human cognition.

A groundbreaking example of generative AI is the Generative Adversarial Network (GAN). Proposed by Ian Goodfellow and his colleagues in 2014, GANs consist of two neural networks – a generator and a discriminator – that are trained concurrently. The generator produces synthetic data, while the discriminator tries to distinguish between real data and the fake data produced by the generator. Over time, the generator becomes increasingly adept at creating data that the discriminator can’t distinguish from real data. This iterative process has allowed for the creation of incredibly realistic images, artworks, and other types of content. It is analogous to a forger trying to create a painting, while an art detective determines its authenticity. The constant duel between the two refines the forger’s skill, allowing for more realistic and convincing creations.
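
A compact way to see the generator-discriminator duel is a toy GAN trained on a one-dimensional Gaussian rather than images. The PyTorch sketch below uses arbitrary network sizes, learning rates, and a made-up target distribution; it is intended only to show the adversarial training loop, not any production GAN.

```python
# Minimal GAN sketch on a toy 1-D target distribution N(3, 0.5).
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data samples
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its samples real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The mean of generated samples should drift towards 3.0 as training proceeds.
print(generator(torch.randn(1000, latent_dim)).mean().item())
```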

Another influential instance of generative AI is in the realm of natural language processing. Models like OpenAI’s GPT series have redefined what machines can generate in terms of human-like text. These models use vast amounts of text data to train, allowing them to produce coherent and contextually relevant sentences and paragraphs that can be almost indistinguishable from human-written content. Such advancements are indicative of the vast potential of generative models in various domains, from content creation to human-computer interaction.

However, the capabilities of generative AI have raised pertinent ethical concerns. The ability to generate realistic content, whether as deepfakes in videos or fabricated news articles, poses significant challenges in discerning authenticity in the digital age. Misuse of these technologies can lead to misinformation, identity theft, and other forms of cyber deception. Consequently, as researchers and practitioners continue to refine and push the boundaries of generative AI, there’s an imperative need to consider the broader societal implications and integrate safeguards against potential misuse.

Despite these concerns, the potential benefits of generative AI are undeniable. From personalised content generation in media and entertainment to rapid prototyping in design and manufacturing, the applications are vast. Moreover, generative models hold promise in scientific domains as well, aiding in drug discovery or simulating complex environmental models, thus facilitating our understanding and addressing some of the most pressing challenges of our times. As the landscape of AI continues to evolve, generative models undoubtedly stand as a testament to both the creative potential of machines and the ingenuity of human researchers who develop them.

In summary, the journey of AI has been one characterised by remarkable inventions, paradigm shifts, and periods of scepticism and renaissance. While AI has demonstrated capabilities previously thought to be the exclusive domain of humans, it is essential to note that current achievements are still categorised as “weak” or “moderate” AIs. The dream of a “strong” AI, which can autonomously contextualise and solve a diverse array of specialised problems, remains confined to the pages of science fiction. Nevertheless, as history has shown, the relentless human pursuit of knowledge, coupled with technological advancements, continues to push the boundaries of what is conceivable.

Links

https://ourworldindata.org/brief-history-of-ai

https://www.ibm.com/topics/artificial-intelligence

https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

https://journals.sagepub.com/doi/abs/10.1177/0008125619864925

Neuromorphic Computing: Bridging Brains and Machines

First published 2024

Neuromorphic Computing represents a significant leap in the field of artificial intelligence, marking a shift towards systems that are inspired by the human brain’s structure and functionality. This innovative approach aims to replicate the complex processes of neural networks within the brain, thereby offering a new perspective on how artificial intelligence can be developed and applied. The potential of Neuromorphic Computing is vast, encompassing enhancements in efficiency, adaptability, and learning capabilities. However, this field is not without its challenges and ethical considerations. These complexities necessitate a thorough and critical analysis to understand Neuromorphic Computing’s potential impact on the future of computing and AI technologies. This essay examines these aspects, exploring the transformative nature of Neuromorphic Computing and its implications for the broader landscape of technology and artificial intelligence.

The emergence of Neuromorphic Computing signifies a pivotal development in artificial intelligence, with its foundations deeply rooted in the emulation of the human brain’s processing capabilities. This novel field of technology harnesses the principles of neural networks and brain-inspired algorithms, evolving over time to create computing systems that not only replicate brain functions but also introduce a new paradigm in computational efficiency and problem-solving. Neuromorphic Computing operates by imitating the complex network of neurons in the human brain, analysing unstructured data with an energy efficiency that approaches that of the biological brain. Human brains, consuming less than 20 watts of power, outperform supercomputers in terms of energy efficiency. In Neuromorphic Computing, this is emulated through spiking neural networks (SNNs), in which layers of artificial neurons fire independently and communicate with one another to initiate changes in response to stimuli.
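
The firing behaviour of such artificial neurons is often described with a leaky integrate-and-fire model: the membrane potential integrates input current, leaks over time, and emits a spike when it crosses a threshold. The short simulation below illustrates this; the constants are illustrative and not tied to any specific neuromorphic chip.

```python
# Simple leaky integrate-and-fire (LIF) neuron, a common SNN building block.
import numpy as np

dt, tau = 1.0, 20.0                  # time step (ms) and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0

rng = np.random.default_rng(3)
v = v_rest
spikes = []
for t in range(200):
    current = rng.uniform(0.0, 0.12)               # noisy input current
    v += dt / tau * (-(v - v_rest)) + current      # leak + integration
    if v >= v_thresh:                              # threshold crossing -> spike
        spikes.append(t)
        v = v_reset                                # reset after firing

print(f"{len(spikes)} spikes at times (ms): {spikes[:10]} ...")
```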

A significant advancement in this field was IBM’s demonstration of in-memory computing in 2017, using one million phase change memory devices for storing and processing information. This development built on IBM’s earlier neuromorphic chip TrueNorth and marked a major reduction in power consumption for neuromorphic computers. TrueNorth itself features one million programmable neurons and 256 million programmable synapses, offering a massively parallel architecture that is both energy-efficient and powerful.

The evolution of neuromorphic hardware has been further propelled by the creation of nanoscale memristive devices, also known as memristors. These devices, functioning similarly to human synapses, store information in their resistance/conductance states and modulate conductivity based on their programming history. Memristors demonstrate synaptic efficacy and plasticity, mirroring the brain’s ability to form new pathways based on new information. This technology contributes to what is known as a massively parallel, manycore supercomputer architecture, exemplified by projects like SpiNNaker, which aims to model up to a billion biological neurons in real time.

Neuromorphic devices are increasingly used to complement and enhance traditional computing technologies such as CPUs, GPUs, and FPGAs. They are capable of performing complex and high-performance tasks, such as learning, searching, and sensing, with remarkably low power consumption. An example of their real-world application includes instant voice recognition in mobile phones, which operates without the need for cloud-based processing. This integration of neuromorphic systems with conventional computing technologies marks a significant step in the evolution of AI, redefining how machines learn, process information, and interact with their environment.

Intel’s development of the Loihi chip marks a significant advancement in Neuromorphic Computing. The transition from Loihi 1 to Loihi 2 signifies more than just an upgrade in technology; it represents a convergence of neuromorphic and traditional AI accelerator architectures. This evolution blurs the previously distinct lines between the two, creating a new landscape for AI and computing. Loihi 2 introduces an innovative approach to neural processing, incorporating spikes of varying magnitudes rather than adhering to the binary values typical of traditional computing models. This advancement not only mirrors the complex functionalities of the human brain more closely but also challenges the conventional norms of computing architecture. Furthermore, the enhanced programmability of Loihi 2, capable of supporting a diverse range of neuron models, further distinguishes it from traditional computing. This flexibility allows for more intricate and varied neural network designs, pushing the boundaries of what is possible in artificial intelligence and computing.

Neuromorphic Computing, particularly through the use of Intel’s Loihi 2 chip, is finding practical applications in various complex neuron models such as resonate-and-fire models and Hopf resonators. These models are particularly useful in addressing challenging real-world optimisation problems. By harnessing the capabilities of Loihi 2, these neuron models can effectively process and solve intricate tasks that conventional computing systems struggle with. Additionally, the application of spiking neural networks, as seen in the use of Loihi 2, offers a new perspective when compared to deep learning-based networks. These networks are being increasingly applied in areas like recurrent neural networks, showcasing their potential in handling tasks that require complex, iterative processing. This shift is not just a theoretical advancement but is gradually being validated in practical scenarios, where their application in optimisation problems is demonstrating the real-world efficacy of Neuromorphic Computing.

Neuromorphic Computing, exemplified by developments like Intel’s Loihi chip, boasts significant advantages such as enhanced energy efficiency, adaptability, and advanced learning capabilities. These features mark a substantial improvement over traditional computing paradigms, especially in tasks requiring complex, iterative processing. However, the field faces several challenges. Training regimes for neuromorphic systems, software maturity, and issues related to compatibility with backpropagation algorithms present hurdles. Additionally, the reliance on dedicated hardware accelerators highlights infrastructural and investment needs. Looking ahead, the potential for commercialisation, especially in energy-sensitive sectors like space and aerospace, paints a promising future for Neuromorphic Computing. This potential is anchored in the technology’s ability to provide efficient and adaptable solutions to complex computational problems, a critical requirement in these industries.

When comparing Neuromorphic Computing with other AI paradigms, distinct technical challenges and advantages come to the forefront. Neuromorphic systems, such as those leveraging Intel’s Loihi chip, distinguish themselves through the integration of stateful neurons and the implementation of sparse network designs. These features enable a more efficient and biologically realistic simulation of neural processes, a stark contrast to the dense, often power-intensive architectures of traditional AI models. However, these advantages are not without their challenges. The unique nature of neuromorphic architectures means that standard AI training methods and algorithms, such as backpropagation, are not directly applicable, necessitating the development of new approaches and methodologies. This dichotomy highlights the innovative potential of neuromorphic computing while underscoring the need for continued research and development in this evolving field.

This essay has thoroughly explored Neuromorphic Computing within artificial intelligence, a field profoundly shaped by the complex workings of the human brain. It critically examined its development, key technical features, real-world applications, and notable challenges, particularly in training and software development. This analysis highlighted the significant advantages of Neuromorphic Computing, such as energy efficiency and adaptability, while also acknowledging its current limitations. Looking forward, the future of Neuromorphic Computing seems bright, especially in specialised areas like aerospace, where its unique features could lead to significant breakthroughs. As this technology evolves, its potential to transform the computing and AI landscape becomes increasingly apparent.

Links

https://techxplore.com/news/2023-11-neuromorphic-team-hardware-mimics-human.html

https://www.nature.com/articles/s43588-021-00184-y

https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html

https://www.silicon.co.uk/expert-advice/the-high-performance-low-power-promise-of-neuromorphic-computing

Advancing Bioinformatics: Integrating Data and Dynamics

First published 2023

Bioinformatics, as a field, has undergone a significant transformation since its inception in the 1970s by pioneers like Dr. Paulien Hogeweg. Initially conceptualised as a study of biological systems through the lens of information processing, it has evolved in response to the changing landscape of biology and technology. The early days of bioinformatics were marked by theoretical approaches, focusing on understanding biological processes as informational sequences. This perspective was foundational in establishing bioinformatics as a distinct discipline, differentiating it from more traditional biological studies.

The advent of advanced experimental techniques and a surge in computing power marked a pivotal shift in bioinformatics. This era ushered in an unprecedented ability to collect and analyse large datasets, transforming bioinformatics into a heavily data-driven field. This shift, while enabling groundbreaking discoveries, also brought to light new challenges. One of the primary concerns has been the tendency to prioritise data analysis over a deep understanding of underlying biological processes. This imbalance risks overlooking the complexity and nuances of biological systems, potentially leading to superficial interpretations of data.

Dr. Hogeweg’s contributions, notably the integration of Darwinian evolution with self-organising processes and the development of the Cellular Potts model, highlight the importance of interdisciplinary approaches in bioinformatics. Her work exemplifies how combining evolutionary theory with computational models can lead to more robust and holistic understandings of biological phenomena. The Cellular Potts model, in particular, has been instrumental in studying cell dynamics, offering insights into how cells interact and evolve over time in a multi-scale context.

The research paper, “Simulation of Biological Cell Sorting Using a Two-Dimensional Extended Potts Model” by Francois Graner and James A. Glazier (1992), presents a critical advancement in the field of bioinformatics, particularly in the area of cellular biology modelling. Their work offers a detailed exploration of how cells sort themselves into distinct groups, a fundamental process in embryonic development and tissue formation. Using a modified version of the large-Q Potts model, the researchers simulated the sorting of two types of biological cells, focusing on the role of differential adhesivity and the dynamics of cell movement.

Graner and Glazier’s study is a prime example of how computational models in bioinformatics can provide insights into complex biological phenomena. Their simulation demonstrates how differences in intercellular adhesion can influence the final configuration of cell sorting. This insight is crucial for understanding how cells organise themselves into tissues and organs, and has implications for developmental biology and regenerative medicine. The use of the Potts model, typically applied in physics for studying phenomena like grain growth in metals, underscores the interdisciplinary nature of bioinformatics. This cross-disciplinary approach allows for the application of theories and methods from one field to solve problems in another, amplifying the potential for discovery and innovation.
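
The flavour of the extended Potts approach can be conveyed with a heavily simplified sketch: lattice sites carry cell identities, unlike neighbours pay a type-dependent adhesion cost, and Metropolis copy attempts gradually coarsen like-type domains. The version below omits the area constraint and other ingredients of the Graner-Glazier model, and all parameter values (adhesion energies, temperature, lattice size) are arbitrary assumptions.

```python
# Heavily simplified 2-D Potts-style cell-sorting sketch (no area constraint).
import numpy as np

rng = np.random.default_rng(0)
L = 40
cell_id = rng.integers(1, 21, size=(L, L))                    # 20 "cells" scattered randomly
cell_type = {i: (0 if i <= 10 else 1) for i in range(1, 21)}  # two cell types

# Adhesion energies: like-type contacts are cheaper than unlike-type contacts.
J = {(0, 0): 2.0, (1, 1): 2.0, (0, 1): 6.0, (1, 0): 6.0}
T = 4.0                                                       # simulation "temperature"
neighbours = ((1, 0), (-1, 0), (0, 1), (0, -1))

def local_energy(grid, x, y):
    """Adhesion energy of site (x, y) with its 4 neighbours (periodic boundaries)."""
    e, s = 0.0, grid[x, y]
    for dx, dy in neighbours:
        n = grid[(x + dx) % L, (y + dy) % L]
        if n != s:
            e += J[(cell_type[s], cell_type[n])]
    return e

for _ in range(100_000):                                      # Metropolis copy attempts
    x, y = rng.integers(0, L, size=2)
    dx, dy = neighbours[rng.integers(0, 4)]
    source = cell_id[(x + dx) % L, (y + dy) % L]
    if source == cell_id[x, y]:
        continue
    old = cell_id[x, y]
    before = local_energy(cell_id, x, y)
    cell_id[x, y] = source
    delta = local_energy(cell_id, x, y) - before
    if delta > 0 and rng.random() >= np.exp(-delta / T):
        cell_id[x, y] = old                                   # reject the copy

# After many steps, like-type domains coarsen -- a toy analogue of cell sorting.
type_grid = np.vectorize(cell_type.get)(cell_id)
print(type_grid)
```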

Furthermore, the study highlights the ongoing challenge in bioinformatics of accurately modelling biological processes. While the simulation provides valuable insights, it also underscores the limitations inherent in computational models. The simplifications and assumptions necessary for such models may not fully capture the intricacies of biological systems. This gap between model and reality is a critical area of focus in bioinformatics, where researchers continually strive to refine their models for greater accuracy and applicability.

Incorporating these findings into the broader context of bioinformatics, it becomes clear that the field is not just about managing and analysing biological data, but also about understanding the fundamental principles that govern biological systems. The work of Graner and Glazier exemplifies how bioinformatics can bridge the gap between theoretical models and practical, real-world biological applications. This balance between theoretical exploration and practical application is what continues to drive the field forward, offering new perspectives and tools to explore the complexity of life.

The paper “How amoeboids self-organize into a fruiting body: Multicellular coordination in Dictyostelium discoideum” by Athanasius F. M. Maree and Paulien Hogeweg (2001) provides a fascinating glimpse into the self-organising mechanisms of cellular systems. Their research focuses on the cellular slime mold Dictyostelium discoideum, a model organism for studying cell sorting, differentiation, and movement in a multi-cellular context. The researchers use a computer simulation to demonstrate how individual amoebae, when starved, aggregate and form a multicellular structure – a process crucial for understanding the principles of cell movement, differentiation, and morphogenesis.

This study is particularly relevant in the context of bioinformatics and computational biology, as it exemplifies the application of computational models to unravel complex biological processes. The use of a two-dimensional extended Potts model, a cellular automaton model, in simulating the morphogenesis of Dictyostelium discoideum showcases the potential of bioinformatics tools in providing insights into biological phenomena that are difficult to observe directly.

One of the key findings of Maree and Hogeweg’s work is the demonstration of how simple rules at the cellular level can lead to complex behaviour at the multicellular level. Their model reveals that the coordination of cell movement, influenced by factors like cAMP signalling, differential adhesion, and cell differentiation, is sufficient to explain the formation of the fruiting body in Dictyostelium discoideum. This insight underscores the importance of understanding cellular interactions and signalling pathways in multicellular organisms, a major focus area in bioinformatics.

Moreover, their research contributes to a deeper understanding of the principles of self-organisation in biological systems. The study shows that multicellular coordination and morphogenesis are not just the result of genetic programming but also involve complex interactions between cells and their environment. This perspective is vital for bioinformatics, which often strives to elucidate the interplay between genetic information and the dynamic biological processes it influences.

In the broader context of bioinformatics, the work of Maree and Hogeweg serves as a reminder of the importance of interdisciplinary approaches. By integrating concepts from physics, computer science, and biology, they have provided a framework that can be applied to other biological systems, enhancing our understanding of developmental biology, tissue engineering, and regenerative medicine. Their research exemplifies how bioinformatics can bridge the gap between data analysis and theoretical modelling, contributing to a comprehensive understanding of life’s complexity.

Looking ahead, bioinformatics faces the challenge of integrating dynamic modelling with complex data analysis. This integration is crucial for advancing our understanding of biological systems, particularly in understanding how they behave and evolve over time. Dr. Hogeweg’s current work on multilevel evolution models is a step towards this integration, aiming to bridge the gap between high-level data analysis and the underlying biological processes.

In conclusion, bioinformatics has come a long way from its initial theoretical roots. The field now stands at a crossroads, with the potential to profoundly impact our understanding of biology. However, this potential can only be fully realised by maintaining a balance between data analysis and the comprehension of biological processes, a challenge that will define the future trajectory of bioinformatics. The pioneering work of researchers like Dr. Hogeweg serves as a guiding light in this work, emphasising the importance of interdisciplinary approaches and the need for models that can encapsulate the dynamic nature of biological systems.

Links

Graner, F., & Glazier, J. A. (1992). Simulation of biological cell sorting using a two-dimensional extended Potts model. Physical Review Letters, 69(13), 2013–2016. https://doi.org/10.1103/PhysRevLett.69.2013

Marée, A. F., & Hogeweg, P. (2001). How amoeboids self-organize into a fruiting body: multicellular coordination in Dictyostelium discoideum. Proceedings of the National Academy of Sciences of the United States of America, 98(7), 3879–3883. https://doi.org/10.1073/pnas.061535198

https://www.genome.gov/genetics-glossary/Bioinformatics

https://link.springer.com/chapter/10.1007/978-3-7643-8123-3_5

https://academic.oup.com/bioinformatics

https://www.mdpi.com/journal/biomedicines/special_issues/ND04QUA43D

Artificial Intelligence for Diabetic Eye Disease

First published 2023

Diabetes is a widespread chronic condition, with an estimated 463 million adults affected globally in 2019, a number projected to rise to 600 million by 2040. The rate of diabetes among Chinese adults has escalated from 9.7% in 2010 to 12.8% in 2018. This condition can cause serious damage to various body systems, notably leading to diabetic retinopathy (DR), a major complication that affects approximately 34.6% of diabetic patients worldwide and is a leading cause of blindness in the working-age population. The prevalence of DR is significant in various regions, including China (18.45%), India (17.6%), and the United States (33.2%).

DR often goes unnoticed in its initial stages as it does not affect vision immediately, resulting in many patients missing early diagnosis and treatment, which are crucial for preventing vision impairment. The disease is characterised by distinct retinal vascular abnormalities and can be categorised based on severity into stages ranging from no apparent retinopathy to proliferative DR, the most advanced form. Diabetic macular edema (DME), another condition that can occur at any DR stage, involves fluid accumulation in the retina and is independently assessed due to its potential to impair vision severely.

Diagnosis of DR and DME is typically made through various methods such as ophthalmoscopy, biomicroscopy, fundus photography, optical coherence tomography (OCT), and other imaging techniques. While ophthalmoscopes and slit lamps are common due to their affordability, fundus photography is the international standard for DR screening. OCT, despite its higher cost, is increasingly recognised for its diagnostic value but is not universally accessible for screening purposes.

The current status of diabetic retinopathy (DR) screening emphasises early detection to improve outcomes for diabetic patients. In the United States, the American Academy of Ophthalmology recommends annual eye exams for individuals with type 1 diabetes beginning five years after diagnosis, and immediate annual exams for those with type 2 diabetes upon diagnosis. Despite these guidelines, compliance with screening is low; a significant proportion of diabetic patients do not receive regular eye exams, with only a small percentage adhering to the recommended screening intervals.

In the United Kingdom, a national diabetic eye screening program initiated in 2003 has been credited with reducing DR as the leading cause of blindness among the working-age population. The program’s success is attributed to the high screening coverage of diabetic individuals nationwide.

Non-compliance with screening recommendations is attributed to factors such as a lack of disease awareness, limited access to medical resources, and insufficient medical insurance. Patients with more severe DR or those who already have vision impairment tend to comply more with screening, suggesting that the lack of symptoms in early DR leads to underestimation of the need for regular check-ups.

The use of telemedicine has been proposed to increase accessibility to screening, exemplified by the Singapore Integrated Diabetic Retinopathy Program, which remotely obtains fundus images for evaluation, reducing medical costs. Telemedicine has been found cost-effective, especially in large populations. Recently, the development of artificial intelligence (AI) has presented an alternative to enhance patient compliance and the efficiency of telemedicine in DR screening. AI can potentially streamline the grading of fundus images, reducing reliance on human resources and improving the screening process.

AI’s origins trace back to 1956 when McCarthy first introduced the concept. Shortly after, in 1959, Arthur Samuel coined the term “machine learning” (ML), emphasising the ability of machines to learn from data without being explicitly programmed. Deep learning (DL), a subset of ML, uses multi-layer neural networks for learning; within this, convolutional neural networks (CNNs) are specialised for image processing, featuring layers designed for pattern recognition.

CNN architectures like AlexNet, VGGNet, and ResNet have been pivotal in advancing AI, achieving high accuracy through end-to-end training on labelled image datasets and optimising parameters via backpropagation algorithms. Transfer learning, another ML technique, leverages pre-trained models on new domains, allowing for effective learning from smaller datasets.
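
As a hedged sketch of how transfer learning might be applied to DR grading, the code below takes an ImageNet-pre-trained ResNet from torchvision (version 0.13 or later assumed), freezes its feature extractor, and trains a new five-class head (DR stages 0-4) on placeholder tensors standing in for preprocessed fundus photographs. It is an illustrative assumption-laden example, not the pipeline of IDx-DR or any approved system.

```python
# Transfer-learning sketch: frozen pre-trained backbone, new 5-way DR head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                  # keep pre-trained features fixed

model.fc = nn.Linear(model.fc.in_features, 5)    # new head: 5 DR severity grades

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Placeholder batch standing in for preprocessed fundus photographs and grades.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))

model.train()
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```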

In the medical field, AI’s image processing capabilities have significantly impacted radiology, dermatology, pathology, and ophthalmology. Specifically in ophthalmology, AI assists in diagnosing conditions like DR, glaucoma, and macular degeneration. The FDA’s 2018 approval of the first AI software for DR, IDx-DR, marked a milestone, using Topcon NW400 for capturing fundus images and analysing them via a cloud server to provide diagnostic guidance.

Further developments in AI for ophthalmology include EyeArt and Retmarker DR, both recognised for their high sensitivity and specificity in DR detection. These AI systems have demonstrated advantages in efficiency, accuracy, and reduced demand for human resources. They have been shown not only to expedite the screening process, as evidenced by an Australian study in which AI-based screening took about 7 minutes per patient, but also to outperform manual screening in both accuracy and patient preference.

AI’s ability to analyse fundus photographs or OCT images at primary care facilities simplifies the screening process, potentially improving patient compliance and significantly reducing ophthalmologists’ workloads. With AI providing immediate grading and recommendations for follow-up or referral, diabetic patients can more easily access and undergo screening, thereby enhancing the management of DR.

To ensure the efficacy and accuracy of AI-based diagnostic systems for diabetic retinopathy (DR), the dataset must be well structured and divided into separate, non-overlapping sets for training, validation, and testing, each with a specific function in developing and evaluating the algorithm. The training set forms the foundation, where the AI algorithm learns to identify and interpret fundus photographs; it must be extensive and comprise high-quality images that have been carefully evaluated and labelled by expert ophthalmologists. According to the guidelines provided by Chinese authorities, if the system uses fundus photographs, these images should be collected from a minimum of two different medical institutions to ensure a varied and comprehensive learning experience. The validation set plays a pivotal role in refining the AI parameters, acting as a tool for algorithm optimisation during development. Lastly, the testing set is paramount for the real-world evaluation of the AI system’s clinical performance; to preserve the integrity of the results, it is kept separate from the training and validation sets, preventing any potential biases that could skew the system’s measured accuracy in practical applications.
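
As a minimal sketch of that three-way, non-overlapping split, the snippet below stratifies an illustrative table of labelled fundus photographs by DR grade; the column names, proportions, and file names are hypothetical placeholders rather than values taken from the guideline itself.

```python
# Illustrative non-overlapping train/validation/test split, stratified by
# DR grade so each set reflects the overall grade distribution.
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical index of labelled fundus photographs.
images = pd.DataFrame({
    "image_path": [f"fp_{i:05d}.jpg" for i in range(10_000)],
    "dr_grade": [i % 5 for i in range(10_000)],  # grades 0-4, placeholder labels
})

# First carve off a held-out test set (20%), then split the remainder
# into training (70% of total) and validation (10% of total).
trainval, test = train_test_split(
    images, test_size=0.20, stratify=images["dr_grade"], random_state=42)
train, val = train_test_split(
    trainval, test_size=0.125, stratify=trainval["dr_grade"], random_state=42)

# Sanity check: the three sets must not share any image.
assert set(train.image_path).isdisjoint(val.image_path)
assert set(train.image_path).isdisjoint(test.image_path)
assert set(val.image_path).isdisjoint(test.image_path)
print(len(train), len(val), len(test))  # 7000 1000 2000
```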

The training set should have a diverse range of images, including at least 1,000 single-field FPs or 1,000 pairs of two-field FPs, 500 non-readable FP images or pairs, and 500 images or pairs showing other fundus diseases besides DR. The images should be graded by at least three qualified ophthalmologists, with the majority opinion determining the final grade. For standard testing, a set should include 5,000 FPs or pairs, with no fewer than 2,500 images or pairs for DR stage I and above, and 500 images or pairs for other fundus diseases. A random selection of 2,000 images or pairs should be used to evaluate the AI system’s performance on the DR stages.

Current research has indicated some issues with the training sets used in existing AI systems. These include the use of FPs from a single source and the inclusion of fewer than the recommended 500 non-readable images or pairs. Furthermore, some training sets sourced from online datasets do not provide access to important patient demographics such as gender and age, which can be crucial for comprehensive training and accurate diagnostics.

The Iowa Detection Program (IDP) is an early example of an AI system for diabetic retinopathy (DR) screening that showed promise in Caucasian and African populations by grading fundus photographs (FP) and identifying characteristic lesions, albeit without employing deep learning (DL) techniques. Its sensitivity was commendable, but it suffered from low specificity. In contrast, IDx-DR incorporated a convolutional neural network (CNN) into the IDP framework, enhancing the specificity of DR detection. Clinical studies highlighted that while IDx-DR’s sensitivity in real-world settings didn’t quite match its testing set performance, it nonetheless demonstrated a satisfactory balance of sensitivity and specificity.

EyeArt expanded AI’s reach into mobile technology, becoming the first system to detect DR using smartphones. A study in India involving 296 type 2 diabetes patients revealed a very high sensitivity and reasonable specificity, proving its potential for remote DR screening. Moreover, systems like Google’s AI for DR screening can adjust sensitivity and specificity thresholds to meet clinical needs, suggesting that a hybrid approach of AI and manual screening could maximise efficiency and minimise missed referable DR cases.

However, most AI systems for DR rely on FPs, which are limited to two dimensions and can only detect diabetic macular edema (DME) through the presence of hard exudates in the posterior pole, potentially missing some cases. Optical coherence tomography (OCT), with its higher detection rate for DME, offers a more advanced diagnostic tool. Combining OCT with AI has led to the development of systems with impressive sensitivity, specificity, and area under the curve (AUC) metrics across various studies. Nonetheless, accessibility remains a challenge in resource-limited areas: Hwang et al.’s AI system for OCT, for instance, still requires OCT equipment and the transfer of images to a smartphone, so patients in underserved regions may still be unable to benefit.

The landscape of AI-based diagnostic systems for diabetic retinopathy (DR) is expansive, yet it confronts numerous challenges. Many systems are trained on online datasets such as Messidor and EyePACS, which are limited by homogeneity in image sources and quality, as well as disease scope. These datasets often fail to encapsulate the diversity of real-world clinical environments, leading to potential misdiagnoses. A lack of standardised protocols for algorithm training exacerbates this, with the variability in sample sizes, image quality, and study designs from different sources undermining the generalisability of these AI systems.

Furthermore, while most research adheres to the International Clinical Diabetic Retinopathy Severity Scale for classifying DR severity, debates continue about its suitability. Some argue that classifications like the Early Treatment Diabetic Retinopathy Study may be more appropriate, as they could reduce unnecessary referrals by better reflecting the slower progression of milder DR forms. Inconsistencies in classification standards among studies affect both algorithm validity and cross-study comparisons.

Compounding these issues is the absence of a unified criterion for evaluating AI algorithms, with significant discrepancies in testing sets and performance metrics such as sensitivity, specificity, and area under the curve (AUC) across studies. Without universal benchmarks, comparing and validating these tools remains challenging. Moreover, AI diagnostics suffer from the “black box” phenomenon—the opaque nature of the decision-making process within AI systems. This obscurity impedes understanding and trust in the algorithms, as users cannot ascertain the rationale behind the AI’s assessments or intervene if necessary.
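
The metrics themselves are simple to compute; the discrepancies usually arise from the test sets and operating thresholds behind them. The sketch below, using entirely illustrative labels and predicted probabilities, shows how sensitivity, specificity, and AUC are derived and how shifting the decision threshold trades one off against the other.

```python
# Sensitivity, specificity, and AUC for a binary "referable DR" classifier,
# computed from illustrative ground-truth labels and predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                # 0 = no referable DR
y_prob = np.clip(y_true * 0.35 + rng.normal(0.4, 0.2, 1000), 0, 1)    # toy model scores

auc = roc_auc_score(y_true, y_prob)

def sens_spec(labels, scores, threshold):
    preds = (scores >= threshold).astype(int)
    tp = np.sum((preds == 1) & (labels == 1))
    tn = np.sum((preds == 0) & (labels == 0))
    fp = np.sum((preds == 1) & (labels == 0))
    fn = np.sum((preds == 0) & (labels == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Compare a default 0.5 threshold with one chosen from the ROC curve to
# reach roughly 95% sensitivity (at the cost of specificity).
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
target = thresholds[np.argmax(tpr >= 0.95)]
for name, thr in [("0.50 threshold", 0.5), ("95%-sensitivity threshold", target)]:
    sens, spec = sens_spec(y_true, y_prob, thr)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}, AUC={auc:.2f}")
```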

Legal and ethical concerns also arise, particularly regarding liability for misdiagnoses. Responsibility cannot be assigned squarely to either the developers or the medical practitioners using AI systems. Presently, this has restricted AI’s application primarily to DR screening. When compounded by obstacles such as cataracts, unclear media, or poor patient cooperation, reliance on AI is further reduced, necessitating ophthalmologist involvement.

Patient data security represents another critical issue. As AI systems for diabetes screening could process vast amounts of personal information, ensuring this data’s use solely for medical purposes and preventing breaches is paramount.

Finally, there’s the limitation of disease specificity in AI systems, where most are trained to detect only DR during fundus examinations. However, some studies have reported AI systems capable of identifying multiple conditions simultaneously, like age-related macular degeneration alongside DR, which could streamline diagnostic processes if widely adopted. Addressing these multifaceted challenges is crucial for the advancement and reliable integration of AI into ophthalmic diagnostics.

Artificial intelligence (AI) holds considerable promise in the field of diabetic retinopathy (DR) screening and diagnosis, with the potential to reshape current approaches significantly. The future could see the proliferation of AI systems designed for portable devices, such as smartphones, enabling patients to conduct DR screenings at home, which may drastically reduce the dependency on professional medical staff and advanced medical equipment. This shift could make DR screening much more accessible, particularly under the constraints imposed by events like the COVID-19 pandemic, where telemedicine’s importance has surged, providing vast benefits and convenience to both patients and healthcare providers.

Most AI-assisted DR screening systems currently rely on traditional fundus imaging. However, as newer examination techniques evolve, AI is expected to integrate with diverse types of ocular assessments, such as multispectral fundus imaging and optical coherence tomography (OCT), which could further enhance diagnostic accuracy. Beyond screening, AI is poised to play a crucial role in DR diagnosis. Some studies have already shown that AI can match or even surpass the sensitivity of human ophthalmologists, supporting the potential of AI-assisted systems to augment the diagnostic process with higher precision and efficiency.

Overall, in countries where DR screening programs are established, integrating AI-based diagnostic systems could significantly alleviate human resource burdens and boost operational efficiency. Despite the optimism, the datasets currently used to train AI algorithms are somewhat restricted in scope. For AI to be more broadly applicable in clinical settings, it is essential to leverage diverse clinical resources to create more varied datasets and to refine standards for image quality and labelling, ensuring AI systems are both standardised and effective. At this juncture, the technology is not yet at a point where it can replace ophthalmologists entirely. Therefore, in the interim, a combined approach where AI complements the work of medical professionals may offer the most realistic and advantageous path forward for the clinical adoption of AI in DR management.

Links

https://www.gov.uk/guidance/diabetic-eye-screening-programme-overview

https://drc.bmj.com/content/5/1/e000333

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9559815/

https://www.mdpi.com/2504-2289/6/4/152

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(20)30250-8/fulltext

https://diabetesatlas.org/

https://pubmed.ncbi.nlm.nih.gov/20580421/

https://www.aao.org/education/preferred-practice-pattern/diabetic-retinopathy-ppp

https://pubmed.ncbi.nlm.nih.gov/27726962/

https://onlinelibrary.wiley.com/doi/10.1046/j.1464-5491.2000.00338.x

https://iovs.arvojournals.org/article.aspx?articleid=2565719

Quantum Shift: Preparing for Post-Quantum Cryptography

First published 2023

The 2020 white paper “Preparing for Quantum Safe Cryptography” explores the profound impact of quantum computing on cybersecurity, a field where quantum mechanics principles could revolutionise or disrupt cryptographic practices. Quantum computers, though offering unmatched computational power, are currently limited by high error rates. Future advancements promise more efficient quantum computers, which pose a considerable threat to existing public key cryptography (PKC) algorithms. Vulnerable algorithms like RSA and those based on the discrete logarithm problem, crucial for key establishment and digital signatures in secure communication, could be easily compromised by these advanced quantum computers.

A significant concern is the potential for future decryption of currently encrypted data by quantum computers, especially data that requires long-term protection. Additionally, these quantum computers, particularly a cryptographically-relevant quantum computer (CRQC), could be used for forging digital signatures or tampering with signed data. However, the effect of quantum computing on symmetric cryptography, which includes algorithms like AES with minimum 128-bit keys and secure hash functions like SHA-256, is comparatively minor, as they remain resilient against quantum attacks.

The white paper recommends that the most effective defence against quantum computing threats lies in adopting post-quantum cryptography (PQC), also known as quantum-safe or quantum-resistant cryptography. PQC algorithms are uniquely designed to be secure against both traditional and quantum computing attacks, and are expected to replace the current vulnerable public key cryptography (PKC) algorithms used for key establishment and digital signatures. However, integrating PQC algorithms into existing systems may not be straightforward, prompting the paper to advise system owners to begin preparing for this transition.

The shift to PQC will differ based on the type of IT systems being used. For general users of commodity IT, such as those using standard browsers and operating systems, the transition to PQC is anticipated to be smooth, largely unnoticed, and rolled out through regular software updates. Here, system owners are encouraged to follow the National Cyber Security Centre’s (NCSC) guidelines to ensure their devices and software are up-to-date, facilitating a seamless switch to PQC.

On the other hand, for enterprise IT systems, which serve the more complex requirements of large organisations, a more active approach is needed. Owners of these systems should start conversations with their IT suppliers about incorporating PQC into their products, ensuring that as PQC becomes a standard, their systems remain compatible and secure.

For systems using bespoke IT or operational technology, such as proprietary communications systems or unique architectures, choosing the right post-quantum cryptography (PQC) algorithms and protocols requires a more intricate decision-making process. Technical system and risk owners of these systems must engage in a detailed evaluation of PQC options, tailoring their choices to meet the specific demands of their unique systems.

Financial planning is essential for all technical system and risk owners, whether they manage enterprise-level or custom-designed systems. Integrating the upgrade to PQC into the regular technology refresh cycles of the organisation is ideal. However, this planning hinges on the finalisation of PQC standards and the availability of their implementations. Such a strategic approach promises a more efficient and cost-effective shift to PQC.

Since 2016, the National Institute of Standards and Technology (NIST) has played a pivotal role in standardising PQC algorithms. This significant endeavour has attracted extensive attention and contributions from the global cryptography community. This standardisation process is under the watchful eyes of major standards-defining bodies, such as the Internet Engineering Task Force (IETF) and the European Telecommunications Standards Institute (ETSI). The IETF is concentrating on updating existing protocols to withstand quantum computing threats, while ETSI is focused on providing guidance for the migration and deployment of these new standards.

The National Institute of Standards and Technology (NIST) has achieved notable progress in selecting key establishment and digital signature algorithms for post-quantum cryptography (PQC). For key establishment, ML-KEM (CRYSTALS-Kyber) has been chosen, while for digital signatures, three algorithms – ML-DSA (CRYSTALS-Dilithium), SLH-DSA (SPHINCS+), and FALCON – have been selected. Additionally, two stateful hash-based signature algorithms, Leighton-Micali Signatures (LMS) and the eXtended Merkle Signature Scheme (XMSS), have been standardised. These are quantum-resistant but are optimal for specific use cases only.

In August 2023, draft standards for ML-KEM, ML-DSA, and SLH-DSA were released, with the final standards anticipated in 2024. The draft standards for FALCON are still pending release. These drafts provide developers an opportunity to integrate and test these algorithms in their systems in preparation for the final release. However, the National Cyber Security Centre (NCSC) cautions against using implementations based on these draft standards in operational systems, as changes before finalisation could lead to compatibility issues with the ultimate standards.

To effectively use these algorithms across the internet and other networks, they need to be woven into existing protocols. The Internet Engineering Task Force (IETF) is in the process of revising widely-used security protocols to include PQC algorithms in mechanisms like key exchange and digital signatures for protocols such as TLS and IPsec. As these post-quantum protocol implementations by the IETF are subject to change until they are formalised as RFCs (Request for Comments), the NCSC strongly advises operational systems to use protocol implementations based on these RFCs, rather than on preliminary Internet Drafts.

The National Cyber Security Centre (NCSC) has recommended a range of algorithms for cryptographic functions, each tailored for specific uses and requirements. The ML-KEM algorithm, as described in NIST Draft – FIPS 203, is a key establishment algorithm designed for general use in creating cryptographic keys. For digital signatures, ML-DSA is detailed in NIST Draft – FIPS 204, making it suitable for various applications needing digital signatures. Additionally, SLH-DSA, outlined in NIST Draft – FIPS 205, is another digital signature algorithm, but it’s specifically intended for scenarios like firmware and software signing where speed is less critical. LMS and XMSS, both detailed in NIST SP 800-208, are digital signature algorithms based on hash functions, primarily used for signing firmware and software, and require careful state management to ensure security.

These algorithms are compatible with multiple parameter sets, allowing adaptation to different security needs. Smaller parameter sets, while less demanding on resources, provide lower security margins and are better suited for data that is either less sensitive or not stored for long periods. Conversely, larger parameter sets offer increased security but require more computational power and result in larger keys or signatures. The choice of a parameter set should be based on the sensitivity and longevity of the data, or the validity period of digital signatures. Importantly, all these parameter sets meet security standards for personal, enterprise, and official government information. For most scenarios, the NCSC recommends ML-KEM-768 and ML-DSA-65 due to their optimal balance of security and efficiency.

Unlike algorithms such as ML-DSA and FALCON, hash-based signatures like SLH-DSA, LMS, and XMSS are generally not suitable for all purposes because of their larger signature sizes and slower performance. However, they are well-suited for situations where speed is not a primary concern, like in firmware and software signing. The security of LMS and XMSS is heavily dependent on proper state management, ensuring that one-time keys are never reused. SLH-DSA serves as a robust alternative in contexts where managing state is challenging, but this comes with the downside of larger signatures and longer signing times. As of August 2023, LMS and XMSS are available as final standards, while SLH-DSA remains a draft standard.

Post-quantum/traditional (PQ/T) hybrid schemes, defined in an IETF Draft, merge post-quantum cryptography (PQC) algorithms with traditional public key cryptography (PKC) algorithms. These hybrid systems pair similar types of algorithms, like a PQC signature algorithm with a traditional PKC signature algorithm, to create a combined signature scheme.
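
As an illustration of the combining step such schemes rely on, the sketch below derives a single session key from two shared secrets: one produced by a traditional key exchange (X25519, via the Python cryptography library) and one standing in as a placeholder for a post-quantum KEM secret such as ML-KEM would produce. The concatenate-then-KDF combiner is a simplified illustration of the general hybrid approach, not a conforming implementation of any draft.

```python
# Illustrative PQ/T hybrid key combiner: both secrets feed one KDF, so the
# derived key stays secure if EITHER component algorithm remains unbroken.
# The ML-KEM shared secret below is a placeholder, not a real encapsulation.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Traditional component: X25519 Diffie-Hellman exchange.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
traditional_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum component: stand-in for the 32-byte shared secret an
# ML-KEM-768 encapsulation/decapsulation would produce.
pq_secret = os.urandom(32)

# Combine: concatenate both secrets and derive the session key with HKDF.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"illustrative PQ/T hybrid key derivation",
).derive(traditional_secret + pq_secret)

print(session_key.hex())
```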

PQ/T hybrid schemes are more costly and complex than single-algorithm systems: they are less efficient and harder to implement and maintain. Despite these drawbacks, they are crucial in certain scenarios. For example, as large networks gradually adopt PQC, there is a transitional phase during which both PQC and traditional PKC algorithms must be supported concurrently. This makes PQ/T hybrids essential for maintaining interoperability across systems with varying security policies and for easing the transition to exclusively using PQC. Since PQC is still maturing, combining it with traditional PKC can also enhance overall system security, a beneficial approach until confidence in PQC is fully established. Additionally, PQ/T hybrids may be necessary due to protocol constraints that make exclusive use of PQC challenging, such as avoiding IP layer fragmentation in IKEv2.

Implementing PQ/T hybrid key establishment schemes, such as those in the draft for Hybrid TLS or the design for IKE (RFC 9370), has been approached in a straightforward, backward-compatible way. However, it’s vital to ensure these hybrids don’t introduce new vulnerabilities, a focus of current ETSI efforts. PQ/T hybrid schemes for authentication are more complex and less studied than those for confidentiality. They require robust verification of both signatures, adding to their complexity. In public key infrastructures (PKIs), updating an individual signature algorithm is difficult, and PQ/T hybrid authentication might necessitate either a PKI that handles both traditional and post-quantum signatures or two separate PKIs. Due to the complexity and challenges in transitioning PKIs, a direct shift to a fully post-quantum PKI is often preferred over a temporary PQ/T hybrid PKI.

Looking forward, if a cryptographically relevant quantum computer (CRQC) becomes operational, traditional PKC algorithms will not provide additional protection. In such a scenario, a PQ/T hybrid scheme would offer no more security than a sole post-quantum algorithm, but with greater complexity and overhead. Therefore, the NCSC recommends viewing PQ/T hybrids as a transitional measure, facilitating an eventual shift to a PQC-only system.

In summary, technical system and risk owners need to weigh the benefits and drawbacks of PQ/T hybrid schemes carefully. These include considerations of interoperability, implementation security, and protocol constraints, balanced against the complexities and costs of maintaining such systems. Additionally, they should be prepared for a two-step migration process: initially transitioning to a PQ/T hybrid scheme and ultimately moving to an exclusively PQC-based system.

Links

https://www.ncsc.gov.uk/whitepaper/preparing-for-quantum-safe-cryptography

https://csrc.nist.gov/projects/post-quantum-cryptography

https://www.cisa.gov/quantum

https://www.ncsc.gov.uk/whitepaper/quantum-security-technologies

https://www.etsi.org/deliver/etsi_tr/103600_103699/103619/01.01.01_60/tr_103619v010101p.pdf

https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.05262020-draft.pdf

The Advantages of Quantum Algorithms Over Classical Limitations of Computation

First published 2023

The dawn of the 21st century has witnessed technological advancements that are nothing short of revolutionary. In this cascade of innovation, quantum computing emerges as a frontier, challenging our conventional understanding of computation and promising to reshape industries. For countries aiming to be at the cutting edge of technological progress, quantum computing isn’t just a scientific endeavour; it’s a strategic imperative. The United Kingdom, with its rich history of pioneering scientific breakthroughs, has recognised this and has positioned itself as a forerunner in the quantum revolution. As the UK dives deep into research, development, and commercialisation of quantum technologies, it’s crucial to grasp how quantum algorithms differentiate themselves from classical ones and why they matter in the grander scheme of global competition and innovation.

In the world of computing, classical computers have been the backbone for all computational tasks for decades. These devices, powered by bits that exist in one of two states (0 or 1), have undergone rapid advancements, allowing for incredible feats of computation and innovation. However, despite these strides, there are problems that remain intractable for classical systems. This is where quantum computers, and the algorithms they utilise, offer a paradigm shift. They harness the principles of quantum mechanics to solve problems that are beyond the reach of classical machines.

At the heart of a quantum computer is the quantum bit, or qubit. Unlike the classical bit, which can be either 0 or 1, a qubit can exist in a superposition of both states simultaneously. This allows quantum computers to explore multiple possibilities at once. Furthermore, qubits exhibit another quantum property called entanglement, wherein the state of one qubit can be dependent on the state of another, regardless of the distance between them. These two properties—superposition and entanglement—enable quantum computers to perform certain calculations exponentially faster than their classical counterparts.
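
In standard Dirac notation, these two properties can be written compactly; the expressions below are the textbook forms of a single-qubit superposition and a maximally entangled two-qubit (Bell) state.

```latex
% Single-qubit superposition (amplitudes alpha and beta are complex numbers)
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1

% A maximally entangled two-qubit Bell state
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)
```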

One of the most celebrated quantum algorithms is Shor’s algorithm, which factors large numbers exponentially faster than the best-known classical algorithms. Factoring may seem like a simple arithmetic task, but when numbers are sufficiently large, classical computers struggle to factor them in a reasonable amount of time. This is crucial in the world of cryptography, where the security of many encryption schemes relies on the difficulty of factoring large numbers. Should quantum computers scale up to handle large numbers, they could potentially break many of the cryptographic systems in use today.
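
The quantum computer’s role in Shor’s algorithm is to find the period r of f(x) = a^x mod N efficiently; turning that period into factors is simple classical arithmetic. The sketch below brute-forces the period classically, standing in for the quantum subroutine, purely to illustrate that final step on a toy modulus.

```python
# Toy illustration of the classical reduction in Shor's algorithm:
# factoring N via the period r of a^x mod N. Period finding is brute-forced
# here; on a quantum computer it is the part that is done efficiently.
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (classical stand-in)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    if gcd(a, n) != 1:                   # lucky guess: a already shares a factor
        return gcd(a, n), n // gcd(a, n)
    r = find_period(a, n)
    half = pow(a, r // 2, n)
    if r % 2 == 1 or half == n - 1:
        return None                      # this choice of a gives no factor; retry
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_classical_part(15, 7))        # -> (3, 5)
```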

Another problem where quantum computers show promise is in the simulation of quantum systems. As one might imagine, a quantum system is best described using the principles of quantum mechanics. Classical computers face challenges when simulating large quantum systems, such as complex molecules, because they do not naturally operate using quantum principles. A quantum computer, however, can simulate these systems more naturally and efficiently, which could lead to breakthroughs in fields like chemistry, material science, and drug discovery.

Delving deeper into the potential of quantum computing in chemistry and drug discovery, we find a realm of possibilities previously thought to be unreachable. Quantum simulations can provide insights into the behaviour of molecules at an atomic level, revealing nuances of molecular interactions, bonding, and reactivity. For instance, understanding the exact behaviour of proteins and enzymes in biological systems can be daunting for classical computers due to the vast number of possible configurations and interactions. Quantum computers can provide a more precise and comprehensive view of these molecular dynamics. Such detailed insights can drastically accelerate the drug discovery process, allowing researchers to predict how potential drug molecules might interact with biological systems, potentially leading to the creation of more effective and targeted therapeutic agents. Additionally, by simulating complex chemical reactions quantum mechanically, we can also uncover new pathways to synthesise materials with desired properties, paving the way for innovations in material science.

Furthermore, Grover’s algorithm is another quantum marvel. While its speedup is not exponential, the algorithm searches an unsorted database in a time roughly proportional to the square root of the size of the database, which is faster than any classical algorithm can achieve. This speedup, while moderate compared to the exponential gains of Shor’s algorithm, still showcases the unique advantages of quantum computation.
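
Stated in terms of oracle queries, the gap looks like this (standard asymptotic results, constants omitted).

```latex
% Query complexity of unstructured search over N items
\text{classical search: } O(N) \text{ queries}
\qquad\text{vs.}\qquad
\text{Grover's algorithm: } O\!\left(\sqrt{N}\right) \text{ queries}
% Example: N = 2^{40} (\approx 10^{12}) items gives
% roughly 2^{40} classical queries versus roughly 2^{20} \approx 10^{6} with Grover
```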

However, it’s important to note that quantum computers aren’t simply “faster” versions of classical computers. They don’t speed up every computational task. For instance, basic arithmetic or word processing tasks won’t see exponential benefits from quantum computing. Instead, they offer a fundamentally different way of computing that’s especially suited to certain types of problems. One notable example is the quantum Fourier transform, a key component in Shor’s algorithm, which allows for efficient periodicity detection—a task that’s computationally intensive for classical machines. Another example is quantum annealing, which finds the minimum of a complex function, a process invaluable for optimisation problems. Quantum computers also excel in linear algebra operations, which can be advantageous in machine learning and data analysis. As the field of quantum computing progresses, alongside the discovery of more quantum algorithms like the Harrow-Hassidim-Lloyd (HHL) algorithm for systems of linear equations, we can expect to uncover an even broader range of problems for which quantum solutions provide a significant edge.
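
For reference, the quantum Fourier transform at the heart of that periodicity detection acts on a basis state of an n-qubit register as follows (the standard textbook definition).

```latex
% Quantum Fourier transform on an n-qubit register (N = 2^n basis states)
\mathrm{QFT}\,|x\rangle \;=\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i \, x k / N}\,|k\rangle
```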

In conclusion, the realm of quantum computing, driven by the unique properties of quantum mechanics, offers the potential to revolutionise how we approach certain computational problems. From cryptography to quantum simulation, quantum algorithms leverage the power of qubits to solve problems that remain intractable for classical machines. As our understanding and capabilities in this domain expand, the boundary between what is computationally possible and impossible may shift in ways we can’t yet fully predict.

Links

https://www.bcg.com/publications/2018/coming-quantum-leap-computing

https://research.ibm.com/blog/factor-15-shors-algorithm

https://aisel.aisnet.org/jais/vol17/iss2/3/

https://research.tudelft.nl/files/80143709/DATE_2020_Realizing_qalgorithms.pdf

https://ieeexplore.ieee.org/document/9222275

https://www.nature.com/articles/s41592-020-01004-3

The Implications of Artificial Intelligence Integration within the NHS

First published 2023

This CreateAnEssay4U special edition brings together the work of previous essays and provides a comprehensive overview of an important technological area of study. For source information, see also:

https://createanessay4u.wordpress.com/tag/ai/

https://createanessay4u.wordpress.com/tag/nhs/

The advent and subsequent proliferation of Artificial Intelligence (AI) have ushered in an era of profound transformation across various sectors. Notably, within the domain of healthcare, and more specifically within the context of the United Kingdom’s National Health Service (NHS), AI’s incorporation has engendered a myriad of both unparalleled opportunities and formidable challenges. From an academic perspective, there is a burgeoning consensus that AI might be poised to rank among the most salient and transformative developments in the annals of human progression. It is neither hyperbole nor mere conjecture to assert that the innovations stemming from AI hold the potential to redefine the contours of our societal paradigms. In the ensuing discourse, we shall embark on a rigorous exploration of the multifaceted impacts of AI within the NHS, striving to delineate the promise it holds while concurrently interrogating the potential pitfalls and challenges intrinsic to such profound technological integration.

Medical Imaging and Diagnostic Services play a pivotal role in the modern healthcare landscape, and the integration of AI within this domain has brought forth noteworthy advancements. AI’s robust capabilities for image analysis have not only enhanced the precision in diagnostics but also broadened the scope of early detection across a variety of diseases. Radiology professionals, for instance, increasingly leverage these advanced tools to identify diseases at early stages and thereby minimise diagnostic errors. Echocardiography charts, used to gauge heart patterns and detect conditions such as ischemic heart disease, are another beneficiary of AI’s analytical prowess. An example of this is the Ultromics platform from a hospital in Oxford, which employs AI to meticulously analyse echocardiography scans.

Moreover, the application of AI in diagnostics transcends cardiological needs. From detecting skin and breast cancer, eye diseases, pneumonia, to even predicting psychotic occurrences, AI’s potential in medical diagnostics is vast and promising. Neurological conditions like Parkinson’s disease can be identified through AI tools that examine speech patterns, predicting its onset and progression. In the realm of endocrinology, a study used machine learning models to foretell the onset of diabetes, revealing that a two-class augmented decision tree was most effective in predicting diabetes-associated variables.

Furthermore, the emergence of COVID-19 as a global threat in late 2019 also saw AI playing a crucial role in early detection and diagnosis. Numerous medical imaging tools, encompassing X-rays, CT scans, and ultrasounds, employed AI techniques to assist in the timely diagnosis of the virus. Recent studies have spotlighted AI’s efficacy in differentiating COVID-19 from other conditions like pneumonia using imaging modalities like CT scans and X-rays. The surge in AI-based diagnostic tools, such as the deep learning model known as the transformer, facilitates efficient management of COVID-19 cases by offering rapid and precise analyses. Notably, the ImageNet-pretrained vision transformer was used to identify COVID-19 cases using chest X-ray images, showcasing the adaptability and precision of AI in response to pressing global health challenges.

Moreover, advancements in AI aren’t limited to diagnostic models alone. The field has seen the emergence of tools like Generative Adversarial Networks (GANs), which have considerably influenced radiological practices. Comprising a generator that produces images mirroring real ones, and a discriminator that differentiates between the two, GANs have the potential to redefine radiological operations. Such networks can replicate training images and create new ones with the training dataset’s characteristics. This technological advancement has not only aided in tasks like abnormal detection and image synthesis but has also posed challenges even for experienced radiologists, as discerning between GAN-generated and real images becomes increasingly intricate.
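
To make the generator/discriminator pairing concrete, here is a deliberately small PyTorch sketch of one adversarial training step on dummy image-sized tensors. Real radiological GANs use far larger convolutional networks and curated datasets, so this is only a minimal illustration of the training dynamic described above.

```python
# Minimal GAN training step: a generator maps noise to fake "images" while a
# discriminator learns to tell real from generated samples.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64 * 64, 100, 16

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))  # raw logit: real vs fake

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(BATCH, IMG_DIM) * 2 - 1   # stand-in for real scans

# Discriminator step: push real logits up, fake logits down.
fake_images = generator(torch.randn(BATCH, NOISE_DIM)).detach()
d_loss = (loss_fn(discriminator(real_images), torch.ones(BATCH, 1)) +
          loss_fn(discriminator(fake_images), torch.zeros(BATCH, 1)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: try to fool the discriminator into calling fakes real.
g_loss = loss_fn(discriminator(generator(torch.randn(BATCH, NOISE_DIM))),
                 torch.ones(BATCH, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```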

Education and research also stand to benefit immensely from such advancements. GANs have the potential to swiftly generate training material and simulations, addressing gaps in student understanding. As an example, if students struggle to differentiate between specific medical conditions in radiographs, GANs could produce relevant samples for clearer understanding. Additionally, GANs’ capacity to model placebo groups based on historical data can revolutionise clinical trials by minimising costs and broadening the scope of treatment arms.

Furthermore, the role of AI in offering virtual patient care cannot be overstated. In a time where in-person visits to medical facilities posed risks, AI-powered tools bridged the gap by facilitating remote consultations and care. Moreover, the management of electronic health records has been vastly streamlined due to AI, reducing the administrative workload of healthcare professionals. It’s also reshaping the dynamics of patient engagement, ensuring they adhere to their treatment plans more effectively.

The impact of AI on healthcare has transcended beyond diagnostics, imaging, and patient care, making significant inroads into drug discovery and development. AI-driven technologies, drawing upon machine learning, bioinformatics, and cheminformatics, are revolutionising the realm of pharmacology and therapeutics. With the increasing challenges and sky-high costs associated with drug discovery, these technologies streamline the processes and drastically reduce the time and financial investments required. Historical precedents, like the AI-based robot scientist named Eve, stand as a testament to this potential. Eve not only accelerated the drug development process but also ensured its cost-effectiveness.

AI’s capabilities are not just confined to the initial phase of scouting potential molecules in the field of drug discovery. There’s a promise that AI could engage more dynamically throughout the drug discovery continuum in the near future. The numerous AI-aided drug discovery successes in the literature are a testament to this potential. A notable instance is the work by the Toronto-based firm Deep Genomics. Harnessing the power of an AI workbench platform, they identified a novel genetic target and consequently developed the drug candidate DG12P1, aimed at treating a rare genetic variant of Wilson’s disease.

One of the crucial aspects of drug development lies in identifying novel drug targets, as this could pave the way for pioneering first-in-class clinical drugs. AI proves indispensable here. It not only helps in spotting potential hit and lead compounds but also facilitates rapid validation of drug targets and the subsequent refinement in drug structure design. Another noteworthy application of AI in drug development is its ability to predict potential interactions between drugs and their targets. This capability is invaluable for drug repurposing, enabling existing drugs to swiftly progress to subsequent phases of clinical trials.

Moreover, with the data-intensive nature of pharmacological research, AI tools can be harnessed to sift through massive repositories of scientific literature, including patents and research publications. By doing so, these tools can identify novel drug targets and generate innovative therapeutic concepts. For effective drug development, models can be trained on extensive volumes of scientific data, ensuring that the ensuing predictions or recommendations are rooted in comprehensive research.

Furthermore, AI’s applications aren’t just limited to drug discovery and design. It’s making tangible contributions in drug screening as well. Numerous algorithms, such as extreme learning machines, deep neural networks (DNNs), random forests (RF), support vector machines (SVMs), and nearest-neighbour classifiers, are now at the forefront of virtual screening. These are employed based on their synthesis viability and their capacity to predict in vivo toxicity and activity, thereby ensuring that potential drug candidates are both effective and safe.
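
As a schematic example of what such virtual-screening classifiers do, the sketch below trains a random forest and an SVM to label compounds as active or inactive from binary fingerprint-like features; the data are synthetic placeholders rather than real assay results.

```python
# Toy virtual screening: predict active/inactive compounds from binary
# fingerprint-style features with two of the classifier families named above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(1000, 256))      # 256-bit synthetic fingerprints
weights = rng.normal(size=256)
y = (X @ weights + rng.normal(scale=2.0, size=1000) > 0).astype(int)  # "activity"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

models = [("random forest", RandomForestClassifier(n_estimators=200, random_state=1)),
          ("SVM", SVC(kernel="rbf", probability=True, random_state=1))]
for name, model in models:
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    print(f"{name}: ROC AUC = {roc_auc_score(y_te, scores):.2f}")
```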

The proliferation of AI in various sectors has brought along with it a range of ethical and social concerns that intersect with broader questions about technology, data usage, and automation. Central among these concerns is the question of accountability. As AI systems become more integrated into decision-making processes, especially in sensitive areas like healthcare, who is held accountable when things go wrong? The possibility of AI systems making flawed decisions, often due to intrinsic biases in the datasets they are trained on, can lead to catastrophic outcomes. An illustration of such a flaw was observed in an AI application that misjudged pneumonia-related complications and potentially jeopardised patients’ health. These erroneous decisions, often opaque in nature due to the intricate inner workings of machine learning algorithms, further fuel concerns about transparency and accountability.

Transparency, or the lack thereof, in AI systems poses its own set of challenges. As machine learning models continually refine and recalibrate their parameters, understanding their decision-making process becomes elusive. This obfuscation, often referred to as the ‘black-box’ phenomenon, hampers trust and understanding. The branch of AI research known as “Explainable Artificial Intelligence (XAI)” attempts to remedy this by making the decision-making processes of AI models understandable to humans. Through XAI, healthcare professionals and patients can glean insights into the rationale behind diagnostic decisions made by AI systems. Furthermore, this enhances the trust quotient, as evidenced by studies that underscore the importance of visual feedback in fostering trust in AI models.
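
One widely used, model-agnostic way to open the box a little is permutation importance: shuffle each input feature in turn and measure how much performance degrades. The sketch below applies it to an illustrative classifier on synthetic tabular data; the feature names are hypothetical and carry no clinical meaning.

```python
# Simple XAI sketch: permutation feature importance for a black-box classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["age", "hba1c", "systolic_bp", "bmi", "noise"]  # hypothetical
X = rng.normal(size=(1500, 5))
y = (1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(scale=1.0, size=1500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and record the drop in held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name:12s} accuracy drop when shuffled = {importance:.3f}")
```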

Another prominent concern is the potential reinforcement of existing societal biases. AI systems, trained on historically accumulated data, can inadvertently perpetuate and even amplify biases present in the data, leading to skewed and unjust outcomes. This is particularly alarming in healthcare, where decisions can be a matter of life and death. This threat is further compounded by data privacy and security issues. AI systems that process sensitive patient information become prime targets for cyberattacks, risking unauthorised access or tampering of data, with motives ranging from financial gain to malicious intent.

The rapid integration of AI technologies in healthcare underscores the need for robust governance. Proper governance structures ensure that regulatory, ethical, and trust-related challenges are proactively addressed, thereby fostering confidence and optimising health outcomes. On an international level, regulatory measures are being established to guide the application of AI in domains requiring stringent oversight, such as healthcare. The European Union, for instance, adopted the GDPR, which came into force in 2018 and set forth data protection standards. More recently, the European Commission proposed the Artificial Intelligence Act (AIA), a regulatory framework designed to ensure the responsible adoption of AI technologies, mandating rigorous assessments for high-risk AI systems.

From a technical standpoint, there are further substantial challenges to surmount. For AI to be practically beneficial in healthcare settings, it needs to be user-friendly for healthcare professionals (HCPs). The technical intricacies involved in setting up and maintaining AI infrastructure, along with concerns of data storage and validity, often act as deterrents. AI models, while potent, are not infallible. They can manifest shortcomings, such as biases or a susceptibility to being easily misled. It is, therefore, imperative for healthcare providers to strategise effectively for the seamless implementation of AI systems, addressing costs, infrastructure needs, and training requirements for HCPs.

The perceived opaqueness of AI-driven clinical decision support systems often makes HCPs sceptical. This, combined with concerns about the potential risks associated with AI, acts as a barrier to its widespread adoption. It is thus imperative to emphasise solutions like XAI to bolster trust and overcome the hesitancy surrounding AI adoption. Furthermore, integrating AI training into medical curricula can go a long way in ensuring its safe and informed usage in the future. Addressing these challenges head-on, in tandem with fostering a collaborative environment involving all stakeholders, will be pivotal for the responsible and effective proliferation of AI in healthcare. Recent events, such as the COVID-19 pandemic and its global implications alongside the Ukraine war, underline the pressing need for transformative technologies like AI, especially when health systems are stretched thin.

Given these advancements, it is pivotal however to scrutinise the sources of this information. Although formal conflicts of interest should be declared in publications, authors may have subconscious biases, for and against, the implementation of AI in healthcare, which may influence the authors’ interpretations of the data. Discussions are inevitable regarding published research, particularly since the concept of ‘false positive findings’ came to the forefront in 2005 in a review by John Ioannidis (“Why Most Published Research Findings Are False”). The observation that journals are biased in publishing more papers that have positive rather than negative findings both skews the total body of the evidence and underscores the need for studies to be accurate, representative, and negligibly biased. When dealing with AI, where the risks are substantial, relying solely on justifiable scientific evidence becomes imperative. Studies that are used for the implementation of AI systems should be well mediated by a neutral and independent third party to ensure that any advancements in AI system implementations are based solely on justified scientific evidence, and not on personal opinions, commercial interests or political views.

The evidence reviewed undeniably points to the potential of AI in healthcare. There is no doubt that there is real benefit in a wide range of areas. AI can enable services to be run more efficiently, allow selection of patients who are most likely to benefit from a treatment, boost the development of drugs, and accurately recognise, diagnose, and treat diseases and conditions.

However, with these advancements come challenges. We identified some key areas of risk: the creation of good-quality big data and the importance of consent; data risks such as bias and poor data quality; the issue of the black box (lack of transparency of algorithms); data poisoning; and data security. Workforce issues were also identified: how AI works with the current workforce and the fear of workforce replacement; the risk of de-skilling; and the need for education and training, and for embedding change. It was also identified that there is a current need for research into the use, cost-effectiveness, and long-term outcomes of AI systems. There will always be a risk of bias, error, and chance statistical anomalies in research and published studies, fundamentally due to the nature of science itself. Yet the aim is to have a body of evidence that helps create a consensus of opinion.

In summary, the transformative power of AI in the healthcare sector is unequivocal, offering advancements that have the potential to reshape patient care, diagnostics, drug development, and a myriad of other domains. These innovations, while promising, come hand in hand with significant ethical, social, and technical challenges that require careful navigation. The dual-edged sword of AI’s potential brings to light the importance of transparency, ethical considerations, and robust governance in its application. Equally paramount is the need for rigorous scientific evaluation, with an emphasis on neutrality and comprehensive evidence to ensure AI’s benefits are realised without compromising patient safety and care quality. As the healthcare landscape continues to evolve, it becomes imperative for stakeholders to strike a balance between leveraging AI’s revolutionary capabilities and addressing its inherent challenges, all while placing the well-being of patients at the forefront.

Links

https://www.gs1ca.org/documents/digital_health-affht.pdf

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7670110/

https://www.who.int/emergencies/diseases/novel-coronavirus-2019/technical-guidance/naming-the-coronavirus-disease-(COVID-2019)-and-the-virus-that-causes-it

https://www.rcpjournals.org/content/futurehosp/9/2/113

https://doi.org/10.1016%2Fj.icte.2020.10.002

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9151356/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7908833/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8285156/

https://pubmed.ncbi.nlm.nih.gov/32665978

https://doi.org/10.1016%2Fj.ijin.2022.05.002

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8669585/

https://scholar.google.com/scholar_lookup?journal=Med.+Image+Anal.&title=Transformers+in+medical+imaging:+A+survey&author=F.+Shamshad&author=S.+Khan&author=S.W.+Zamir&author=M.H.+Khan&author=M.+Hayat&publication_year=2023&pages=102802&pmid=37315483&doi=10.1016/j.media.2023.102802&

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8421632/

https://www.who.int/docs/defaultsource/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf

https://www.bbc.com/news/health-42357257

https://www.ibm.com/blogs/research/2017/1/ibm-5-in-5-our-words-will-be-the-windows-to-our-mental-health/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10057336/

https://doi.org/10.48550%2FarXiv.2110.14731

https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

https://scholar.google.com/scholar_lookup?journal=Proceedings+of+the+IEEE+15th+International+Symposium+on+Biomedical+Imaging&title=How+to+fool+radiologists+with+generative+adversarial+networks?+A+visual+turing+test+for+lung+cancer+diagnosis&author=M.J.M.+Chuquicusma&author=S.+Hussein&author=J.+Burt&author=U.+Bagci&pages=240-244&

https://pubmed.ncbi.nlm.nih.gov/23443421

https://www.nuffieldbioethics.org/assets/pdfs/Artificial-Intelligence-AI-in-healthcare-and-research.pdf

https://link.springer.com/article/10.1007/s10916-017-0760-1

Polypharmacy in the Aging Population: Balancing Medication, Humanity, and Care

First published 2023

Polypharmacy, the concurrent use of multiple medications by a patient, has become increasingly prevalent, especially among older adults. As societies worldwide witness a surge in their aging populations, the issue of polypharmacy becomes even more pressing. In many countries, a significant portion, often exceeding 20%, of the population is aged 65 and above. This demographic shift has several implications, not the least of which is the complex and multifaceted issue of medication management.

Women, who constitute a majority of the elderly population, become even more predominant as age advances. This gender skew in the older demographic is vital to consider, especially when discussing drug safety. Older women might face heightened susceptibility to drug-related harm compared to their male counterparts. Such vulnerabilities can arise from pharmacokinetic and pharmacodynamic changes. These distinctions emphasise the necessity of tailoring medication regimes to accommodate these differences, making medication optimisation for older women a priority.

The ramifications of polypharmacy extend beyond the individual. The risks associated with polypharmacy, which include inappropriate or unsafe prescribing, can be profoundly detrimental. Recognising these dangers, the World Health Organization (WHO) initiated the “Medication Without Harm” campaign as its third Global Patient Safety Challenge. Launched in 2017, this initiative seeks to halve avoidable medication harm over a span of five years. Its inception underscores the global nature of the polypharmacy issue and the consequent need for concerted, international attention.

Deprescribing, a strategy centered on judiciously reducing or discontinuing potentially harmful or unnecessary medications, emerges as a crucial countermeasure to polypharmacy’s perils. Implementing a systematic approach to deprescribing can not only improve an older individual’s quality of life but also significantly decrease the potential for drug-related harm. This is particularly relevant for older women, emphasising once again the need to incorporate sex and gender considerations into prescribing and deprescribing decisions.

While much of the research and initiative focus has been directed towards high-income countries, the principles of safe medication prescribing are universally relevant. The interaction between biological (sex) and sociocultural (gender) factors plays a pivotal role in determining medication safety. Understanding and accounting for these nuances can greatly enhance the process of prescribing or deprescribing medications for older adults. For clinicians to truly optimise the care of their older patients, a holistic approach to medication review and management is essential. Such an approach not only emphasises the individual’s unique needs and vulnerabilities but also incorporates broader considerations of sex and gender, ensuring a comprehensive and informed decision-making process.

The intricacies of polypharmacy and its management, especially in older adults, bring to light the broader challenges facing our healthcare system. As the elderly population grows, so does the prevalence of chronic diseases. These ailments often necessitate multiple medications for management and symptom relief. Consequently, the line between therapeutic benefit and potential harm becomes blurred. The balance between ensuring the effective management of various health conditions while avoiding medication-induced complications is a tightrope that clinicians must walk daily.

Deprescribing is not just about reducing or stopping medications; it’s about making informed decisions that prioritise the patient’s overall well-being. This involves a thorough understanding of each drug’s purpose, potential side effects, and how they interact with other medications the patient might be taking. But beyond that, it also demands an in-depth conversation between the patient and the healthcare provider. Patients’ beliefs, concerns, and priorities must be integral to the decision-making process. This collaborative approach ensures that the process of deprescribing respects the individual’s values and desires, moving away from a solely clinical standpoint to one that incorporates patient autonomy and quality of life.

Furthermore, the integration of technology and data analytics can play a significant role in enhancing medication safety. Electronic health records, when used effectively, can offer a comprehensive view of a patient’s medication history, allowing clinicians to identify potential drug interactions or redundancies. Predictive analytics, fed with vast amounts of data, might also identify patients at high risk for drug-related harms, thereby aiding in early interventions. The digital age, with its myriad tools, has the potential to revolutionise the way we approach polypharmacy, offering more precise, personalised, and proactive care.
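
As a schematic of the kind of predictive flag such analytics might produce, the sketch below fits a logistic regression to synthetic records with a handful of illustrative risk factors and then scores a new patient. The feature names, coefficients, and data are placeholders for illustration only, not a validated clinical model.

```python
# Illustrative risk model: probability of drug-related harm from a few
# routinely recorded factors. Entirely synthetic; not a clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
age = rng.normal(78, 8, n)                 # years
n_medications = rng.poisson(7, n)          # concurrent medicines
egfr = rng.normal(60, 20, n)               # renal function, mL/min/1.73m2
prior_adr = rng.integers(0, 2, n)          # previous adverse drug reaction

# Synthetic outcome: harm more likely with age, polypharmacy, low eGFR, prior ADR.
logit = -7 + 0.04 * age + 0.25 * n_medications - 0.02 * egfr + 1.0 * prior_adr
harm = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, n_medications, egfr, prior_adr])
model = LogisticRegression(max_iter=1000).fit(X, harm)

new_patient = np.array([[84, 11, 38, 1]])  # hypothetical high-risk profile
risk = model.predict_proba(new_patient)[0, 1]
print(f"predicted risk of drug-related harm: {risk:.1%}")
```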

However, while technology can assist, it cannot replace the fundamental human elements of care — empathy, understanding, and communication. The process of deprescribing, or even the decision to continue a medication, often involves deep emotional and psychological dimensions for patients. Fear of relapsing into illness, concerns about changing what seems to be working, or even the symbolic acknowledgment of aging and frailty can be profound considerations for many. Clinicians must be attuned to these subtleties, approaching each case with sensitivity and a genuine commitment to understanding the person behind the patient.

Moreover, education and continuous training are pivotal. Healthcare professionals must stay updated on the latest research, guidelines, and best practices related to medication management in older adults. This not only pertains to the intricacies of pharmacology but also to the soft skills of patient communication, shared decision-making, and ethical considerations. A well-informed and compassionate healthcare provider is a cornerstone of safe and effective medication management.

In conclusion, addressing the challenges of polypharmacy in an aging global population requires a multi-faceted approach. While the scientific and technical aspects are undeniably crucial, the human elements — understanding, collaboration, and compassion — remain at the heart of optimal care. As we navigate the complexities of medication management, it is essential to remember that at the centre of every decision is an individual, with their hopes, fears, and aspirations. Prioritising their holistic well-being, informed by both science and humanity, is the ultimate goal.

Links

https://www.who.int/news-room/fact-sheets/detail/ageing-and-health

https://www.who.int/publications/i/item/WHO-HIS-SDS-2017.6

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1019475/good-for-you-good-for-us-good-for-everybody.pdf

https://www.agedcarequality.gov.au/news-centre/newsletter/quality-bulletin-36-december-2021

https://www.nia.nih.gov/news/dangers-polypharmacy-and-case-deprescribing-older-adults

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9450314/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4239968/

https://bmcgeriatr.biomedcentral.com/articles/10.1186/s12877-022-03408-6

Navigating the Complex Landscape of AI-Augmented Labour

First published 2023

Artificial intelligence (AI) has transformed the world, automating tedious tasks and pioneering breakthroughs in various sectors like healthcare. This rapid transformation promises unprecedented productivity boosts and avenues for innovation. However, as AI integrates deeper into the fabric of our daily lives, it has become evident that its benefits are not distributed evenly. Its impact could exacerbate existing social and economic disparities, particularly across demographics like race, making the dream of an equitable AI future elusive.

Today, many aspects of our lives, ranging from mundane tasks to critical decision-making in healthcare, benefit from AI’s potential. But the growing chasm of inequality resulting from AI’s penetration has sparked concerns. Business and governmental leaders are under mounting pressure to ensure AI’s advantages are universally accessible. Yet, the challenges seem to evolve daily, leading to a piecemeal approach to solutions or, in some instances, no solutions at all. Addressing AI-induced inequalities necessitates a proactive, holistic strategy.

A recent survey highlighted this division starkly. Out of the participants, 41% identified as “AI Alarmists”, those who harbour reservations about AI’s encroachment into the workplace. On the other hand, 31% were “AI Advocates” who staunchly support AI’s incorporation into labour. The remaining 28% were “AI Agnostics”, a group that views AI’s integration with balanced optimism and scepticism. Even though these figures originate from a limited online survey, they underscore the absence of a singular mindset on AI’s value in labour. The varying perspectives on the uses and users of AI provide a glimpse into the broader societal evaluations, which the researchers aim to examine further in upcoming studies.

To pave the path for a more equitable AI future, policymakers and business leaders must first identify the underlying forces propelling AI-driven inequalities. The researchers propose a comprehensive framework that captures these forces and emphasises the intricate social mechanisms through which AI both creates and perpetuates disparity. This approach offers twofold advantages: it’s versatile enough to be applicable across varied contexts, from healthcare to art, and it sheds light on the often-unseen ways AI impacts the demand for goods and services, a crucial factor in the spread of inequality.

Algorithmic bias epitomises the technological forces. It arises when decision-making algorithms systematically disadvantage certain groups. The implications of such biases can be disastrous, especially in critical sectors like healthcare, criminal justice, and credit scoring. Natural language processing, which reads written text and interprets it for coding, is one specific route by which unconscious bias enters AI systems. For example, it can process medical documents and convert them into coded data that is then used to make inferences from large datasets. If the system is trained on medical notes that carry well-established human biases, such as the disproportionate recording of particular questions for African American or LGBTQ+ patients, it can learn a spurious link between those characteristics and clinical outcomes. These real-world biases are then silently reinforced and multiplied, which could entrench systematic racial and homophobic biases in the AI system.
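
The toy Python example below, using entirely synthetic notes and an invented “demographic_x” token, illustrates the mechanism: when a term is disproportionately recorded in notes that also carry a particular label, even a naive text model will learn a spurious association between the two. It is a sketch of the failure mode, not a depiction of any real system.

# Toy illustration with synthetic data: if a term is recorded
# disproportionately in notes that also carry a particular label, a naive
# text model learns a spurious association between the two.
from collections import Counter, defaultdict

notes = [
    ("chest pain reported, demographic_x noted in social history", "high_risk"),
    ("demographic_x, attended for medication review", "high_risk"),
    ("routine follow-up, bloods stable", "low_risk"),
    ("chest pain resolved, discharged home", "low_risk"),
]

token_label_counts = defaultdict(Counter)
for text, label in notes:
    for token in set(text.replace(",", "").split()):
        token_label_counts[token][label] += 1

counts = token_label_counts["demographic_x"]
association = counts["high_risk"] / sum(counts.values())
print(f"P(high_risk | note mentions demographic_x) = {association:.2f}")  # 1.00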

AI’s effects on supply and demand also intricately contribute to inequality. On the supply side, AI’s potential to automate and augment human labour can significantly reduce the costs of delivering some services and products. However, as research suggests, certain jobs, especially those predominantly held by Black and Hispanic workers, are more susceptible to automation.

On the demand side, AI’s integration into various professions affects people’s valuation of those services. Research indicates that professionals advertising AI-augmented services might be perceived as less valuable or less skilled.

A metaphor that aptly describes this scenario is a tripod, whose three legs are the technological, supply-side, and demand-side forces. If one leg is deficient, it destabilises the entire structure, compromising its function and value. For a truly equitable AI future, all three forces must be robust and well balanced.

Rectifying these disparities requires multifaceted strategies. Platforms offering AI-generated services should educate consumers about AI’s complementary role, emphasising that it enhances rather than replaces human expertise. While addressing algorithmic biases and automation’s side effects is vital, these efforts alone won’t suffice. Achieving an era where AI uplifts and equalises requires stakeholders – from industries to governments and scholars – to collaboratively devise strategies that champion human-centric and equitable AI benefits.

In summation, the integration of AI into various sectors, from healthcare to graphic design, promises immense potential. However, it’s equally essential to address the challenges that arise, particularly concerning biases and public perception. As our society navigates the AI-augmented landscape, the tripod metaphor is a poignant reminder that every aspect needs equal attention and support. Rectifying algorithmic biases, reshaping perceptions, and fostering collaboration between sectors are crucial steps towards a more inclusive and equitable AI future. Embracing these facets will not only unlock AI’s full potential but also ensure its harmonious coexistence with human expertise, leading us towards a future that benefits all.

Links

https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/

Quantum Computing: Unlocking the Complexities of Biological Sciences

First published 2023

Quantum computing is positioned at the cutting-edge juncture of computational science and biology, promising revolutionary solutions to complex biological problems. The intertwining of advanced experimentation, theoretical advancements, and increased computing prowess has traditionally powered our understanding of intricate biological phenomena. As the demand for more robust computing infrastructure increases, so does the search for innovative computing paradigms. In this milieu, quantum computing (QC) emerges as a promising development, especially given the recent strides in technological advances that have transformed QC from mere academic intrigue to concrete commercial prospects. These advancements in QC are supported and encouraged by various global policy initiatives, such as the US National Quantum Initiative Act of 2018, the European Quantum Technologies Flagship, and significant efforts from nations like the UK and China.

At its core, quantum computing leverages the esoteric principles of quantum mechanics, which predominantly governs matter at the molecular scale. Particles, in this realm, manifest dual characteristics, acting both as waves and particles. Unlike randomised classical computation, which combines non-negative probabilities to reach computational outcomes, quantum computers operate using complex amplitudes along computational paths. This introduces a qualitative leap in computing, allowing for the interference of computational paths, reminiscent of wave interference. While building a quantum computer is a daunting task, with current capabilities limited to around 50-100 qubits, their inherent potential is astounding. The term “qubit” designates a quantum system that can exist in two states, similar to a photon’s potential path choices in two optical fibres. It is this scalability of qubits that accentuates the power of quantum computers.

A salient feature of quantum computation is the phenomenon of quantum speedup. Simplistically, while both quantum and randomised computers navigate the expansive landscape of possible bit strings, the former uses complex-valued amplitudes to derive results, contrasting with the addition of non-negative probabilities employed by the latter. Determining the instances and limits of quantum speedup is a subject of intensive research. Some evident advantages are in areas like code-breaking and simulating intricate quantum systems, such as complex molecules. The continuous evolution in the quantum computing arena, backed by advancements in lithographic technology, has resulted in more accessible and increasingly powerful quantum computers. Challenges do exist, notably the practical implementation of quantum RAM (qRAM), which is pivotal for many quantum algorithms. However, a silver lining emerges in the form of intrinsically quantum algorithms, which are designed to leverage quintessential quantum features.
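
A small numerical sketch (Python, assuming NumPy is available) makes the contrast between amplitudes and probabilities concrete: applying a Hadamard gate twice returns the state |0> with certainty, because the amplitudes for |1> cancel, whereas applying a classical 50/50 randomising step twice leaves the outcome an even coin flip.

# Sketch: complex amplitudes versus non-negative probabilities.
# Two Hadamard gates return |0> with certainty because the amplitudes for
# |1> cancel (destructive interference); the classical analogue, applying
# a 50/50 randomising step twice, remains an even coin flip.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (acts on amplitudes)
R = np.array([[0.5, 0.5], [0.5, 0.5]])         # classical 50/50 mixing (acts on probabilities)

qubit = np.array([1.0, 0.0])    # quantum state |0>
bit = np.array([1.0, 0.0])      # classical bit: definitely 0

print(np.abs(H @ (H @ qubit)) ** 2)  # ~[1. 0.]  -> measuring gives 0 with certainty
print(R @ (R @ bit))                 # [0.5 0.5] -> still a coin flip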

The potential applications of quantum computing in biology are vast and multifaceted. Genomics, a critical segment of the biological sciences, stands to gain enormously. By extrapolating recent developments in quantum machine learning algorithms, it’s plausible that genomics applications could soon benefit from the immense computational power of quantum computers. In neuroscience, the applications are expected to gravitate toward optimisation and machine learning. Additionally, quantum biology, which probes into chemical processes within living cells, presents an array of challenges that could be aptly addressed using quantum computing, given the inherent quantum nature of these processes. However, uncertainties persist regarding the relevance of such processes to higher brain functions.

In summation, while the widespread adoption of powerful, universal quantum computers may still be on the horizon, history attests to the fact that breakthroughs in experimental physics can occur unpredictably. Such unforeseen advancements could expedite the realisation of quantum computing’s immense potential in tackling the most pressing computational challenges in biology. As we venture further into this quantum age, it’s evident that the fusion of quantum computing and biological sciences could redefine our understanding of life’s most intricate mysteries.

Links

https://www.nature.com/articles/s41592-020-01004-3

https://ts2-space.webpkgcache.com/doc/-/s/ts2.space/en/decoding-the-quantum-world-of-biology-with-artificial-intelligence/

The Nature and Importance of Data Representation in Computers

First published 2023

The evolution and functionality of computers have always revolved around their innate ability to process and manage data. Deriving its name from the word “compute”, which means to calculate or work out, a computer is essentially a device that performs calculations. These calculations encompass a wide range of tasks such as arithmetic operations, sorting lists, or even something as intricate as determining the movement of a character in a game. To put it succinctly, a dictionary defines a computer as “an electronic device for storing and processing data, according to instructions given to it in a program.”

At the core of this machine’s function is its reliance on human-driven instructions. Without the blueprint provided by a human, in the form of programming, a computer remains dormant, unable to execute any task. It is only when these instructions are fed to the system that the computer transforms into a tool capable of astounding tasks. These tasks are carried out at a pace and accuracy unmatched by human capabilities, as demonstrated by the speed and precision in the video “Fujitsu Motherboard Production”, showcasing a computer-controlled system in action. This ability to process vast amounts of information at rapid speeds without errors is one of the many advantages computers have over humans.

Yet, it would be misguided to consider computers infallible or equivalent to human cognition. While they may possess powerful processors often referred to as “brains”, they do not possess the capability to think autonomously. Unlike humans, they are devoid of emotions, common sense, or the ability to ponder over problems using abstract thought. Hence, despite their strengths, there are countless everyday tasks that remain beyond their realm of capabilities.

In the vast world of computing, ‘data’ is the quintessence. Whether it’s numbers, text, graphics, sound, or video, it’s all data. However, to be understood and processed by a computer, this data needs to be translated into a format the computer can decode. This is where the concept of binary comes into play. Computers fundamentally operate by rapidly toggling circuits, or transistors, between two states – on and off. This two-state operation is symbolised by the numbers 1 and 0, respectively. These numbers, known as binary digits, or bits, are the foundational building blocks of data representation in computers.
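
As a brief illustration, the Python snippet below shows how even a short piece of text reduces to bits: each character is stored as a byte, and each byte is simply eight binary digits.

# Every piece of data a computer stores ultimately reduces to bits.
# Here the text "Hi" is encoded as bytes, and each byte shown as 8 bits.
text = "Hi"
for byte in text.encode("ascii"):
    print(f"{byte:3d} -> {byte:08b}")
#  72 -> 01001000
# 105 -> 01101001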

Software, data, and almost every operational aspect of a computer are represented using these bits. Looking back, early computers such as the 1975 Altair 8800 processed data in groups of 8 bits, a unit termed a byte. Bytes then became the primary metric for memory and storage. Today, computers handle data ranging from megabytes (MB) to gigabytes (GB) and even more monumental units. By 2015, a staggering 8 zettabytes of data were estimated to be stored across global computer systems.

While these numbers sound vast and incomprehensible, it is crucial to use appropriate units when conveying such information. Expressing data sizes in suitable units ensures clarity and comprehensibility. For instance, stating that a high-definition movie occupies 1,717,986,918 bytes is cumbersome and difficult to grasp. Instead, it’s far more comprehensible to express it as 1.6 GB. Thus, correct representation not only simplifies understanding but ensures that information is delivered meaningfully and efficiently.
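
A short Python sketch of this conversion, using the conventional factor of 1,024 between units, reproduces the movie example above.

# Convert a raw byte count into a readable unit (using 1 KB = 1,024 bytes).
def human_readable(num_bytes):
    units = ["bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB"]
    size = float(num_bytes)
    for unit in units:
        if size < 1024 or unit == units[-1]:
            return f"{size:.1f} {unit}"
        size /= 1024

print(human_readable(1_717_986_918))  # 1.6 GB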

In conclusion, data representation lies at the heart of computing. From the earliest computers to today’s advanced systems, the binary system of 1s and 0s has remained the fundamental language of machines. This intricate mechanism allows computers to process vast amounts of information swiftly and accurately, solidifying their role as invaluable tools in today’s digital age. However, as we marvel at their capabilities, it is essential to remember that they still operate within the confines of human programming and lack the nuances of human thought and emotion. As the world continues to generate colossal amounts of data, the importance of understanding and efficiently conveying data units only grows. It serves as a testament to the symbiotic relationship between man and machine, where both are indispensable to the other’s success and progress.

Links

home.adelphi.edu/~siegfried/cs170/170l1.pdf

The NHS Digital Clinical Safety Strategy: Towards Safer and Digitally Enabled Care

First published 2023

Ensuring patient safety remains at the forefront of providing high-quality healthcare. Even with significant advancements in the realm of patient safety, the sobering reality is that numerous patients suffer injuries or even lose their lives due to safety issues every year. What’s even more alarming is that a staggering 83% of these harmful incidents are believed to be preventable.

Safe patient care is a complex composition created from the detailed interactions of human, technical, and systemic elements. As healthcare systems progress, healthcare professionals must continuously adapt, particularly when new digital solutions that could cause disruptions are integrated. Recognising the varied nature of this challenge, the digital clinical safety strategy, a project developed through collaboration between NHSX, NHS Digital, NHS England, and NHS Improvement, tackles the issue from two main angles. Firstly, it emphasises the critical need to ensure the intrinsic safety of the digital technologies being implemented. At the same time, these digital tools are viewed as potential answers to the current safety challenges within the healthcare sector.

In today’s digitally inclined world, certain technologies have already found widespread acceptance. Devices such as heart rate sensors, exercise trackers, and oximeters, collectively termed “wearables”, have become an integral part of our daily lives. Furthermore, the proliferation of health and fitness apps, evidenced by the fact that 1.7 billion people had downloaded one by 2018, is testament to their growing influence. Beyond assisting individuals in managing chronic conditions, these digital technologies play an indispensable role in healthcare delivery. A classic example of this is the use of electronic health records which, when combined with data mining techniques, yield valuable insights that can steer both clinical practice and policy-making.

However, as healthcare pivots towards a heavier reliance on digital means, ensuring the uninterrupted availability and unquestionable reliability of these technologies becomes paramount. It’s equally crucial that the digital interventions be tailored to match the unique preferences, needs, and digital literacy levels of individual patients, thus enhancing their overall experience.

The World Health Organization’s recent patient safety action plan has underscored the potential of digital technologies in bolstering patient safety. By improving patient access to electronic health records, we can potentially elevate the quality of care delivered, including minimising medication errors. Additionally, innovations such as artificial intelligence are making significant inroads in areas like medical imaging and precision medicine. Chatbots, another digital marvel, are transforming healthcare by providing a spectrum of services from disseminating medical information to offering mental health support.

Yet, the path to fully harnessing the power of digital technologies isn’t without its hurdles. A considerable portion of the population remains digitally disconnected, limiting their access to essential resources such as health information, education, and emerging care pathways. Furthermore, health information technology isn’t immune to glitches and can occasionally contribute to adverse patient outcomes. A study highlighting this risk found that out of 2267 patient safety incidents tied to health information technology failures in England and Wales, a significant 75% were potentially avoidable, with 18% causing direct harm to patients.

The onslaught of the COVID-19 pandemic accelerated the pace of digital adoption in healthcare. In England, virtual consultations in primary care witnessed a twofold increase in the early days of the pandemic. Meanwhile, in the US, virtual appointments surged by a remarkable 154% during the last week of March 2020 compared with the same period the previous year. These shifts, although driven by a global health emergency, hold promise for long-term benefits, encompassing improved continuity of care, cost reductions, and better clinical outcomes. Yet, the increased adoption of virtual care isn’t devoid of pitfalls. Challenges range from increased clinical uncertainties to the potential for security breaches.

The digital clinical safety strategy offers five key national action recommendations. These encompass the routine collection of information on digital clinical safety incidents, amplifying the access to and availability of digital clinical safety training, establishing a centralised digital clinical safety information hub, speeding up the adoption of digital technologies to monitor implanted medical devices, and cultivating evidence on the optimal ways to employ digital means for enhancing patient safety.

In conclusion, the recommendations encapsulated in the digital clinical safety strategy set the stage for a safer and more effective digitally enhanced healthcare future. However, success in this domain isn’t the sole responsibility of national safety leaders but demands a collaborative effort. It involves everyone, from patients and the general public to the healthcare workforce, collectively embedding a safety-first culture in healthcare. As we stand on the cusp of a digital healthcare revolution, it’s essential to remember that these recommendations are but the initial steps towards a safer, more efficient future, and frontline healthcare workers remain pivotal in bringing this vision to fruition.

Links

https://transform.england.nhs.uk/key-tools-and-info/digital-clinical-safety-strategy/

https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(18)30386-3/fulltext

https://pubmed.ncbi.nlm.nih.gov/30605296/

https://kclpure.kcl.ac.uk/portal/en/publications/impact-of-ehealth-in-allergic-diseases-and-allergic-patients

https://www.who.int/teams/integrated-health-services/patient-safety/policy/global-patient-safety-action-plan

https://www.nature.com/articles/s41746-021-00418-3

https://pubmed.ncbi.nlm.nih.gov/27147516/

https://pubmed.ncbi.nlm.nih.gov/33323263/

https://pubmed.ncbi.nlm.nih.gov/32791119/

The Exploitation of Data in AI Systems

First published 2023

In the era of the Fourth Industrial Revolution, artificial intelligence (AI) stands as one of the most transformative technologies, touching almost every sector, from healthcare to finance. This revolutionary tool relies heavily on vast amounts of data, which helps train sophisticated models to make predictions, classify objects, or even diagnose diseases. However, like every technology, AI systems are not immune to vulnerabilities. As AI continues to integrate more deeply into critical systems and processes, the security of the data it uses becomes paramount.

One of the underexplored and potentially perilous vulnerabilities is the integrity of the data on which these models train. In traditional cyber-attacks, adversaries may target system infrastructure, attempting to bring down networks or steal information. But when it comes to AI, the nature of the threat evolves. Instead of simply disabling or infiltrating systems, adversaries can manipulate the very foundation upon which these systems stand: the data. This covert form of tampering, called ‘data poisoning’, presents a unique challenge because the attack is not on the system itself, but on its learning mechanism.

In essence, data poisoning corrupts the data in subtle ways that might not be immediately noticeable. When AI systems train on this tainted data, they can produce skewed or entirely incorrect outputs. This is especially concerning in sectors like healthcare, where decisions based on AI predictions can directly impact human lives. A state, a large corporation, a small group, or an individual could maliciously compromise data sources at the point of collection or processing, so that the model is trained on poisoned data and ends up incorrectly classifying or diagnosing patients. Imagine a scenario where medical data is deliberately tampered with to misdiagnose patients or mislead treatment plans; the repercussions could be life-threatening. As an extreme example, a large drug corporation could poison data so that a risk-scoring AI model classifies patients as higher risk than they actually are, enabling the company to sell more drugs to those ‘high risk’ patients.
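
The Python sketch below, with entirely invented numbers, illustrates the principle: flipping a handful of training labels shifts a simple threshold-based risk model so that more patients are scored as “high risk”. Real poisoning attacks and real models are far more sophisticated; the sketch only shows how tainted training data changes what a model learns.

# Synthetic sketch of data poisoning: flipping two training labels shifts a
# simple threshold-based risk model so that more patients are flagged as
# high risk. All numbers are invented for illustration.
def fit_threshold(scores, labels):
    """Threshold = midpoint between the mean score of each class."""
    low = [s for s, l in zip(scores, labels) if l == "low"]
    high = [s for s, l in zip(scores, labels) if l == "high"]
    return (sum(low) / len(low) + sum(high) / len(high)) / 2

scores          = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
clean_labels    = ["low", "low", "low", "low", "high", "high", "high", "high"]
poisoned_labels = ["low", "low", "high", "high", "high", "high", "high", "high"]

print(fit_threshold(scores, clean_labels))     # 0.5   -> patients above 0.5 flagged
print(fit_threshold(scores, poisoned_labels))  # ~0.38 -> far more patients flagged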

Beyond the healthcare sector, the implications of data poisoning ripple out across various industries. In finance, maliciously altered data can result in fraudulent transactions, market manipulations, and inaccurate risk assessments. In the realm of autonomous vehicles, poisoned data might lead to misinterpretations of road scenarios, endangering lives. For the defence sector, compromised data could misinform crucial military decisions, leading to strategic failures. The breadth and depth of data poisoning’s potential impacts cannot be overstated, given AI’s ubiquitous presence in modern society.

Addressing this challenge necessitates a multifaceted approach. First, there’s a need for stringent data validation protocols. By ensuring that only verified and legitimate data enters training sets, the chance of contamination decreases. Additionally, there is a need for constant vigilance and monitoring of AI systems to allow for the early detection of changes which may indicate data poisoning. Anomaly detection algorithms can be employed to scan for unusual patterns in data that might indicate tampering. Organisations should also embrace differential privacy techniques, which add a layer of noise to the data, making it difficult for attackers to reverse-engineer or poison it. Finally, continuous monitoring and retraining of AI models will ensure that they remain robust against evolving threats. By frequently updating models based on clean and recent data, any impacts of previous poisoning attacks can be mitigated.
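
As one minimal illustration of such a safeguard, the Python sketch below applies a simple z-score screen to flag implausible values before they reach a training set. The readings and the threshold are assumptions chosen for illustration; production systems use considerably richer anomaly detection methods.

# Minimal anomaly screen: flag values more than two standard deviations from
# the mean before they enter a training set. Data and threshold are
# illustrative; real pipelines use far richer detection methods.
import statistics

def flag_outliers(values, z_threshold=2.0):
    mean = statistics.mean(values)
    spread = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / spread > z_threshold]

systolic_bp_readings = [118, 122, 125, 119, 121, 117, 123, 400]  # 400 looks poisoned
print(flag_outliers(systolic_bp_readings))  # [400]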

Collaboration also stands as a potent weapon against data poisoning. By fostering a global community of AI researchers, practitioners, and policymakers, best practices can be shared, and standardised protocols can be developed. Such collaborative efforts can lead to the establishment of universally recognised benchmarks and evaluation metrics, ensuring the security and reliability of AI models irrespective of their application. Additionally, regulatory bodies must step in, imposing penalties on entities found guilty of data tampering and promoting transparency in AI deployments.

In the age of data-driven decision-making, ensuring the integrity of the information fueling our AI systems is of paramount importance. Data poisoning, while a subtle and often overlooked threat, has the potential to derail the very benefits that AI promises. By acknowledging the gravity of this issue and investing in preventive and corrective measures, society can harness the power of AI without being beholden to its vulnerabilities. As with every technological advancement, vigilance, adaptation, and collaboration will be the keys to navigating the challenges that arise, ensuring a safer and more prosperous future for all.

Links

https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf

https://www.elibrary.imf.org/view/journals/087/2021/024/article-A001-en.xml

https://www.datrics.ai/articles/anomaly-detection-definition-best-practices-and-use-cases

https://www.mdpi.com/2624-800X/3/3/25

https://www.nationaldefensemagazine.org/articles/2023/7/25/defense-department-needs-a-data-centric-digital-security-organization

Exploring Challenges of AI Implementation in Healthcare: A Study from Sweden

First published 2023

The advent of artificial intelligence (AI) in healthcare promises groundbreaking advancements, from early diagnosis to personalised treatment and improved patient outcomes. However, its successful integration remains fraught with challenges. A deeper understanding of these challenges and potential solutions is necessary for the effective deployment of AI in the healthcare sector.

A Swedish research study, conducted in 2021, sought to examine these challenges. The study was based on explorative qualitative research, where individual, semi-structured interviews were conducted over an eight-month span, from October 2020 to May 2021, with 26 healthcare leaders in a regional setting. A qualitative content analysis methodology was employed, adopting an inductive approach, to extract meaningful patterns from the data.

The analysis of collected data revealed three distinct categories of challenges in the context of AI integration in healthcare. The first category, conditions external to the healthcare system, concentrated on challenges stemming from factors beyond the control of the healthcare system. These include regulatory issues, societal perceptions of AI, and various ethical concerns. The second category, capacity for strategic change management, highlighted concerns raised by leaders about the organisation’s ability to manage strategic changes effectively. Influential factors here included infrastructure readiness, technology compatibility, and the prevailing organisational culture. The third category, transformation of healthcare professions and healthcare practice, focused on the expected disruptions AI might cause in the roles and responsibilities of healthcare professionals. In this regard, leaders expressed apprehensions about potential resistance from healthcare staff, the need for retraining, and the changing nature of patient care due to the integration of AI.

Healthcare executives thereby identified numerous challenges to integrating AI, both within the wider healthcare infrastructure and in their specific organisations. These obstacles range from external environmental factors to the intrinsic ability to manage transformative change, as well as shifts in healthcare roles and practices. The study concluded that healthcare workers can lack trust in AI and that poorly implemented systems contribute to a reluctance to adopt them. The researchers proposed the “need to develop implementation strategies across healthcare organisations to address challenges to AI-specific capacity building”.

The findings emphasise the importance of crafting strategies tailored to AI integration across healthcare institutions. Moreover, regulatory frameworks and policies should be in place to guide the proper formation and deployment of these AI strategies. Effective implementation necessitates dedicating ample time and resources, and fostering collaboration among healthcare providers, regional governing bodies, and relevant industry partners.

However, the study was not without its limitations. The reliance on just 26 healthcare professionals can be seen as a significant constraint, given the multidimensional and expansive nature of AI in healthcare. Such a limited dataset can inherently introduce biases and may not capture the broader perspective or concerns of the entire community. There is also no indication of how these 26 professionals were chosen, or whether they had any conflicts of interest. The study was conducted in a single region of the country, so regional differences in the understanding and perception of AI may complicate any extrapolation of its conclusions to a national level. Furthermore, healthcare systems differ significantly around the world, and conclusions about a system that is fit for purpose in Sweden may have little relevance to the healthcare systems of other countries.

Further compounding these concerns is the quality of the journal in which the study was published. Impact factor measures the frequency with which an average article in a journal is cited over a set period of time. A score of 10 or higher is considered excellent. The ‘BMC Health Services Research’ journal’s impact factor stands at a modest 2.9. While respectable, this isn’t remarkably high, suggesting that the study’s findings might not carry the weight of those published in more prestigious journals.

In conclusion, while the study brought forward valuable insights from a regional Swedish perspective, it’s essential to consider its limitations when forming a comprehensive understanding. The integration of AI into healthcare is a complex endeavour, with challenges both intrinsic to the healthcare system and in the broader societal context. Addressing these challenges requires collaborative efforts among healthcare entities, policymakers, and industry stakeholders. The investment in time, resources, and well-guided strategies, complemented by clear laws and policies, is paramount for the seamless and effective integration of AI in healthcare.

Links

https://bmchealthservres.biomedcentral.com/articles/10.1186/s12913-022-08215-8

Revolutionising Patient Data: The Role of Coding and AI in the NHS

First published 2023

The integration and effective management of patient data remain a considerable challenge in National Health Service (NHS) practice. In hospital medicine and general practice, information about patients lies in numerous, often poorly unified databases, and as a result patient data can be hard or impossible to find. The vast oceans of data that the NHS has to manage require systematic organisation and easy retrieval mechanisms to ensure that medical professionals can access the information they need promptly and accurately.

Coding plays a pivotal role in addressing these challenges. A 2016 study found that doctors spent up to 52.9% of their time working on Electronic Patient Records (EPR), with much of this time spent ‘Coding’ (adding unique marker codes to patients’ records to clearly identify their conditions, treatments, and any other relevant information). The purpose of coding extends beyond mere organisation. It acts as a linguistic bridge, allowing various databases and systems within the NHS to communicate seamlessly. By assigning unique codes, a standard language is established, enabling different databases to become more integrated, which improves the accessibility and unification of patient information.
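
The Python sketch below illustrates this “linguistic bridge” idea: two separately held record sets can be linked because both attach the same concept code to a condition. The records are invented, and the SNOMED CT-style codes are used purely for illustration.

# Sketch: a shared code acts as a common language between systems. Records
# from a GP system and a hospital system, held separately, can be linked
# because both attach the same (illustrative) concept code.
gp_records = [
    {"patient_id": "P001", "code": "44054006", "term": "Type 2 diabetes mellitus"},
]
hospital_records = [
    {"patient_id": "P001", "code": "44054006", "admission": "2023-04-12"},
    {"patient_id": "P001", "code": "38341003", "admission": "2023-06-02"},
]

def admissions_for_coded_condition(code, gp, hospital):
    """Find hospital admissions linked to a condition coded in primary care."""
    coded_patients = {r["patient_id"] for r in gp if r["code"] == code}
    return [r for r in hospital
            if r["patient_id"] in coded_patients and r["code"] == code]

print(admissions_for_coded_condition("44054006", gp_records, hospital_records))
# [{'patient_id': 'P001', 'code': '44054006', 'admission': '2023-04-12'}]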

Furthermore, the significance of coding goes hand in hand with clinical utility. Coding is important for clearly surfacing significant diagnoses and treatments in the health record. By doing this, coding assists medical professionals by enhancing the visibility of crucial patient information. This optimises clinical workflows, as clinicians can easily find the data necessary to inform medical decisions, ultimately improving patient outcomes.

Moreover, coding facilitates better communication and alert systems across different healthcare settings. This helps highlight medical conditions to treating clinicians in different healthcare settings and can enable rapid alerting to clinicians, for example, if a patient is showing signs of sepsis (a life-threatening reaction to an infection). Such capabilities illustrate the crucial role that coding plays in real-time clinical decision-making, supporting clinicians in delivering timely and appropriate care.

Beyond manual data entry, advances in technology present promising avenues for improving the coding process within the NHS. Sophisticated AI systems can use natural language processing and machine learning techniques to read and interpret electronic text at very large scale and then code the data. Such innovations have the potential to dramatically streamline and refine the data coding process. In a trial at King’s College Hospital, a clinical coding AI called ‘Cogstack’ was able to triple the depth of coding within the space of a month. The implications of these technological breakthroughs for the NHS cannot be overstated. If such systems were implemented nationally, coding capacity would increase significantly, freeing up more clinical hours for doctors to see patients. It would also improve the overall efficiency and quality of care by creating safer, higher-quality data. These AI-driven tools are therefore not luxury add-ons but necessary instruments that can revolutionise the way patient data is managed and used within the NHS.
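
By way of illustration only, the Python sketch below assigns codes to free text with a simple keyword lookup. This is not how Cogstack or comparable tools work internally; real systems rely on trained natural language processing models, but the sketch conveys the general idea of turning unstructured notes into coded data. The terms and codes shown are illustrative.

# Minimal illustration of auto-coding free text with a keyword lookup.
# Real clinical coding AI uses natural language processing models; the
# terms and ICD-10-style codes here are illustrative only.
CODE_LOOKUP = {
    "sepsis": "A41.9",
    "type 2 diabetes": "E11",
    "hypertension": "I10",
}

def auto_code(note):
    text = note.lower()
    return sorted({code for term, code in CODE_LOOKUP.items() if term in text})

note = "Known hypertension. Admitted with suspected sepsis secondary to UTI."
print(auto_code(note))  # ['A41.9', 'I10']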

In conclusion, coding in NHS practice acts as a vital tool in improving data accessibility, integration, and clinical utility. Through coding, the NHS can overcome the challenges posed by disparate databases and improve the efficiency and effectiveness of patient care delivery. Thus, coding is not just a technical process but a clinical imperative that underpins the functionality and responsiveness of healthcare services within the NHS.

Links

https://digital.nhs.uk/developer/guides-and-documentation/building-healthcare-software/clinical-coding-classifications-and-terminology

https://www.nuance.com/asset/en_uk/campaigns/healthcare/pdfs/nuance-report-clinical-coder-en-uk.pdf

https://transform.england.nhs.uk/media/documents/NHSX_AI_report.pdf

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6510043/

https://pubmed.ncbi.nlm.nih.gov/31463251/

Ethical and Practical Dimensions of Big Data in the NHS

First published 2022

Understanding the concept of Big Data in the field of medicine is relatively straightforward: it involves using extensive volumes of medical information to uncover trends or correlations that may not be discernible in smaller datasets. However, one might wonder why Big Data hasn’t been more widely applied in this context in the NHS. What sets industries like Google, Netflix, and Amazon apart, enabling them to effectively harness Big Data for providing precise and personalised real-time information based on online search and purchasing activities, compared to the National Health Service?

An examination of these thriving industries reveals a key distinction: they have access to data that is freely and openly provided by customers and is delivered directly and centrally to the respective companies. This wealth of detailed data encompasses individual preferences and aversions, facilitating accurate predictions for future online interactions.

Could it be feasible to use extensive volumes of medical data, derived from individual patient records, to uncover new risks or therapeutic possibilities that can then be applied on a personalised level to enhance patient outcomes? When we compare the healthcare industry to other sectors, the situation is notably distinct. In healthcare, medical records, which contain highly sensitive personal information, are carefully protected and not openly accessible. Typically, data remains isolated within clinic or hospital records, lacking a centralised system for sharing that would enable the rapidity and scale of data necessary to fully harness Big Data techniques. Medical data is also intricate and less readily “usable” in comparison to the data provided to major corporations, often requiring processing to render it into a readily usable format. Additionally, the technical infrastructure required for the movement, manipulation, and management of medical data is not readily accessible.

In a general sense, significant obstacles exist in terms of accessing data, and these obstacles encompass both philosophical and practical dimensions. To enhance the transformation of existing data into novel healthcare solutions, several aspects must be tackled. These encompass, among other things, the gathering and standardisation of diverse datasets, the careful curation of the resulting refined data, securing prior informed consent for the use of de-identified data, and the capacity to offer these datasets for further use by the healthcare and research communities.

To gain a deeper understanding of the opportunities within the clinical field and why the adoption and adaptation of these techniques haven’t been a straightforward transfer from other industries, it’s beneficial to examine both the similarities and distinctions between clinical Big Data and data used in other sectors. Industries typically work with what can truly be labelled as Big Data, characterised by substantial volume, rapid velocity, and diversity, but often of low information density. These data are frequently freely obtained, stemming from an individual’s incidental digital activities in exchange for services, serving as a proxy indicator for specific behaviours that enable the anticipation of patterns, trends, and outcomes. Essentially, such data are acquired at the moment services are accessed, and they either exist or do not exist.

Comparable data can be found in clinical settings as well. For instance, during surgery, there is the continuous monitoring of physiological parameters through multiple devices, generating substantial volume, high velocity, and diverse data that necessitate real-time processing to identify data falling outside predefined thresholds, prompting immediate intervention by attending clinicians. On the other hand, there are instances of lower-volume data, such as the day-to-day accumulation of clinical test results, which contribute to updated diagnoses and medical management. Likewise, the analysis of population-based clinical data has the capability to forecast trends in public health, like predicting the timing of infectious disease outbreaks. In this context, velocity offers “real-time” prospective insights and allows for trend forecasting. The data are attributable to their source, whether that is a patient in the operating room or a specific geographical population experiencing the winter flu season.

The primary use of this real-time information is to forecast future trends through predictive modeling, without attempting to provide explanations for the findings. However, a more immediate focus of Big Data is the extensive clinical data already stored in hospitals, aiming to address the question of why specific events are occurring. These data have the potential, provided they can be effectively integrated and analysed, to offer insights into the causes of diseases, enable their detection and diagnosis, guide treatment and management, and facilitate the development of future drugs and interventions.

To assimilate this data, substantial computing power well beyond what an individual can manage is required, thus fitting the definition of Big Data. The data will largely be population-specific and then applied to individuals (e.g., examining patient groups with different disease types or processes to gain new insights for individual benefit). Importantly, this data will be collected retrospectively, rather than being acquired prospectively.

Lastly, while non-medical Big Data has often been incidental, freely available, and of low information density, clinical Big Data will be intentionally gathered, incurring costs (borne by someone), and characterised by high information density. This is more akin to business intelligence, where Big Data techniques are needed to derive measurements and detect trends (not just predict them) that would otherwise remain concealed or beyond human inspection alone.

Patient data, regardless of its nature, often seems to be associated with the medical institutions that hold it. However, it’s essential to recognise that these institutions function as custodians of the data; the data itself belongs to the patients. Access to and use of this data beyond clinical purposes necessitate the consent of the patients. This immediately poses a challenge when it comes to the rapid use of the extensive data already contained in clinical records.

While retrospective, hypothesis-driven research can be conducted on specific anonymised data, as is common in research, it’s important to note that once a study concludes, the data should ideally be deleted. This approach contradicts the principles of advancing medical knowledge, particularly when employing Big Data techniques that involve thousands to millions of data points requiring significant processing. Losing such valuable data at the conclusion of a project is counterproductive.

Prospective patient consent to store and use their data offers a more robust model, enabling the accumulation of substantial datasets that can be subsequently subjected to hypothesis-driven research questions. Although foregoing the use of existing retrospective data may appear wasteful, the speed (velocity) at which new data are generated in the NHS makes consented data far more valuable. Acquiring patient consent, however, often necessitates on-site personnel to engage with patients. Alternatively, options like patients granting blanket consent for data usage may be viable, provided that such consent is fully informed.

This dilemma has come to the forefront due to the implementation of the EU General Data Protection Regulation (GDPR) in 2018, triggering an international discourse on the sharing of Big Data in healthcare. In 2021, the UK government commissioned the ‘Goldacre Review’ into how to create big data sets, and how to ensure the “efficient and safe use of health data for research and analysis can benefit patients and the healthcare sector”. The review concluded that it is essential to invest in safe and trusted platforms for data and high-quality data curation to allow researchers and AI creators to realise the potential of the data. This data “represents deeply buried treasure, that can help prevent suffering and death, around the planet, on a biblical scale.”

Following the Goldacre Review, the UK government launched the ‘National Data Strategy’, which supports the creation of high-quality big data, and ‘Data Saves Lives’, which specifically sets out to “make better use of healthcare data and to save lives”. The ‘Data Saves Lives’ initiative exemplifies the progressive approach the UK has taken towards harnessing the power of Big Data in healthcare. Recognising the transformative potential of large-scale medical datasets, the initiative seeks to responsibly leverage patient data to drive innovations in medical research and clinical care. There’s a recognition that while industries like Netflix or Amazon can instantly access and analyse user data, healthcare systems globally, including the NHS, must manoeuvre through more complex ethical, legal, and logistical terrains. Patient data is not just another statistic; it is a deeply personal narrative that holds the key to both individual and public health solutions. Ensuring its privacy, obtaining informed consent, and simultaneously making it available for meaningful research is a balancing act, one that the NHS is learning to master.

In conclusion, the use of Big Data in the realm of healthcare differs significantly from its application in other industries, primarily due to the sensitive nature of the data and the ethical implications of its use. The potential benefits of harnessing this data are immense, from individualised treatments to large-scale public health interventions. Yet, the complexities surrounding its access and use necessitate a thoughtful, patient-centric approach. Initiatives like ‘Data Saves Lives’ signify the healthcare sector’s commitment to unlocking the potential of Big Data, while ensuring patients remain at the heart of the conversation. As the NHS and other global healthcare entities navigate this promising frontier, the underlying ethos must always prioritise patient welfare, trust, and transparency.

Links

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6502603/

https://www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis

https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy

https://digital.nhs.uk/services/national-data-opt-out/understanding-the-national-data-opt-out/confidential-patient-information

Big Data: Two Sides of An Argument

First published 2022

In this contemporary era, technology is continually advancing, leading to the accumulation of personal data in the form of numerous digits and figures. The term ‘Big Data’ refers to data that contains greater variety, arriving in increasing volumes and with more velocity, which is often referred to as the three Vs. In simpler terms, big data encompasses extensive and intricate datasets, particularly those stemming from novel data sources. These datasets are of such immense volume that conventional data processing software is inadequate for handling them. Nevertheless, these vast pools of data can be harnessed to solve business challenges that were previously insurmountable. For example, ‘Big Data’ is required for AI to work properly. For the AI algorithms to correctly recognise and ‘Intelligently’ understand patterns and correlations, they need access to a huge amount of data. It is important that this big data has the correct ‘Volume, Velocity, and Variety’ (3 Vs).

Many individuals are concerned about safeguarding this intangible yet highly valuable aspect of their lives. Given the profound importance people place on their privacy, numerous inquiries emerge regarding the ultimate custodians of this information. What if it came to light that corporations were exploiting loopholes in data privacy regulations to further their own financial gains? Two articles examine the concept of exposing private information: “Private License Plate Scanners Amassing Vast Databases Open to Highest Bidders” (RT, 2014) and “Who Has The Right to Track You?” (David Sirota, 2014). While unveiling how specific businesses profit from the scanning of license plates and the collection of individuals’ personal data, both authors effectively employ a range of persuasive techniques to sway their readers.

Pathos serves as a rhetorical device that aims to evoke emotional responses from the audience. In the second article, titled “Who Has The Right to Track You?”, David Sirota adeptly employs pathos to establish a strong emotional connection with his readers. Starting with the article’s title and continuing with questions like, “Do corporations have a legal right to track your car?”, he deliberately strikes a chord of apprehension within the reader. Sirota uses phrases such as “mass surveillance” and “mass photography,” repeatedly emphasising the accumulation of “millions of license plate images” to instill a sense of insecurity in the reader.

Throughout the article, he maintains a tone of genuine concern and guardianship on his part, often addressing the reader in the second person and assuring them that he is an advocate for “individuals’ privacy rights.” This approach enables him to forge a connection with the audience, making them feel as though he is actively looking out for their well-being.

The author of the first article, RT, employs pathos to engage readers from a contrasting standpoint, using phrases such as “inhibiting scanners would…create a safe haven…for criminals” and “reduce the safety of our officers, and it could ultimately result in lives lost”. These statements are crafted to instill fear in the audience, persuading them to consider the necessity of sacrificing their privacy for the sake of law enforcement’s ability to safeguard them. RT avoids using the term “mass surveillance” and instead employs more lighthearted expressions like “the scanners ‘scoop up 1,800 plates a minute’”. By using this less threatening language, such as “scoop up,” the author intends to alleviate any concerns readers may have about this practice, portraying it in a more benign light.

Both authors employ the rhetorical device of logos, which involves using logic and reason to persuade their respective audiences. Sirota, for instance, provides data such as the cameras in question “capturing data on over 50 million vehicles each month” and their widespread use in major metropolitan areas. This substantial data serves to evoke discomfort in the reader and cultivate a fundamental distrust in these surveillance systems. Sirota further invokes reason by highlighting that valuable information like “household income” is being collected to enable companies to target consumers more effectively. Through this logical approach, he underscores the ethical concerns regarding how companies disseminate such information to willing clients.

In contrast, RT employs logos to assuage the reader’s concerns about data collection. He emphasises that the primary users of this data collection are “major banks, tracking those defaulting on loans,” and the police, who use it to apprehend criminals. Essentially, RT is conveying to the reader that as long as they are not engaged in wrongdoing, there should be no cause for alarm. Moreover, he reassures the reader that illicit use of scanning procedures is an uncommon occurrence, citing a business owner who states, “If we saw scanning like this being done, we would throw them out”. This logical argument is designed to ease the reader’s anxieties about the potential misuse of data collection systems.

Both authors employ ethos in their persuasive efforts, with Sirota demonstrating a stronger use of this rhetorical appeal. One factor contributing to the weakness of the first article is the credibility of its sources. In the RT article, the quotations often come from heavily biased sources, such as the large corporations themselves. For instance, the person quoted as stating, “I fear that the proposed legislation would essentially create a safe haven in the Commonwealth for certain types of criminals, it would reduce the safety of our officers, and it could ultimately result in lives lost,” is not a law enforcement officer, attorney, or legislator; rather, it is Brian Shockley, the vice president of marketing at Vigilant, the corporate parent of Digital Recognition. It is problematic for the reader to be frightened into relinquishing their privacy by a corporation that stands to profit from it.

In contrast, Sirota cites sources with high credibility, or extrinsic ethos, throughout his article. He quotes ACLU attorney Catherine Crump, who states: “‘One could argue that the privacy implications of a private individual taking a picture of a public place are significantly different from a company collecting millions of license plate images…there may be a justification for regulation.” Sirota presents a relatable source representing the public’s interests from a legal perspective, rather than one aligned with a corporation seeking to gain from the situation.

The balance between corporate and national security interests on one hand, and individual privacy and rights on the other, continues to be a significant subject in our increasingly tech-driven society. The authors of the articles examined in this discussion skillfully employed ethos, pathos, and logos to build their cases regarding the use of private license plate scanners. Numerous journalists and news outlets have also contributed their perspectives on this matter, aiming to educate the public about both sides of the argument. While journalists and writers may present a particular viewpoint, it ultimately falls upon the reader to carefully contemplate all the ramifications of the debate.

Links

https://h2o.ai/wiki/big-data/

https://www.rt.com/usa/license-scanners-private-database-046/

https://inthesetimes.com/article/do-companies-have-a-right-to-track-you

Digital Health: Improving or Disrupting Healthcare?

First published 2022; revised 2023

In recent years, the integration of digital technology into the healthcare sector has led to a transformative shift in how medical care is delivered and managed. This phenomenon, often referred to as “digital health,” encompasses a wide range of technological advancements, including electronic health records, telemedicine, wearable devices, health apps, and artificial intelligence. As the healthcare industry grapples with the complexities of this digital revolution, a pressing question emerges: is digital health primarily improving or disrupting care? This leads to questions about the multifaceted impact of digital health on healthcare systems, patients, and professionals, ultimately suggesting that while challenges exist, the potential benefits of digital health far outweigh its disruptive aspects.

One of the most significant advantages of digital health is its potential to improve the quality and accessibility of care. Electronic health records (EHRs) have streamlined the process of storing and sharing patient information among healthcare providers, reducing the chances of errors and ensuring more coordinated care. This enhanced communication promotes patient safety and can lead to better health outcomes.

Telemedicine, a subset of digital health, has revolutionised the way healthcare is delivered. It enables remote consultations, making medical expertise accessible to individuals who may have previously faced geographical or logistical barriers to care. This is especially crucial in rural or underserved areas, where access to specialised medical services might be limited.

Wearable devices and health apps empower patients to monitor their health in real time, providing valuable insights that can promote preventive care and early intervention. For instance, individuals with chronic conditions like diabetes can track their blood sugar levels and receive alerts when they deviate from the normal range. This not only keeps patients informed about their health but also enables healthcare providers to tailor treatment plans more effectively.
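
A minimal Python sketch of the kind of alert described might look like the following; the target range is an illustrative assumption, since real thresholds are patient- and device-specific.

# Sketch of a wearable-style alert: flag blood glucose readings that fall
# outside an illustrative target range (real ranges are patient-specific).
TARGET_RANGE_MMOL_L = (4.0, 10.0)

def check_reading(glucose_mmol_l):
    low, high = TARGET_RANGE_MMOL_L
    if glucose_mmol_l < low:
        return "ALERT: below target range"
    if glucose_mmol_l > high:
        return "ALERT: above target range"
    return "in range"

for reading in [5.6, 3.1, 12.4]:
    print(reading, check_reading(reading))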

Artificial intelligence (AI) is another area where digital health is making substantial strides. Machine learning algorithms can analyse large datasets to identify patterns and predict disease outbreaks, thereby improving public health surveillance. Additionally, AI-powered diagnostic tools assist healthcare professionals in interpreting medical images with higher accuracy, aiding in early disease detection.

While the potential benefits of digital health are undeniable, there are also challenges and disruptions that need to be addressed. Privacy and security concerns are prominent issues, as the collection and storage of vast amounts of personal health data raise the risk of unauthorised access and breaches. Ensuring robust cybersecurity measures is imperative to protect patients’ sensitive information.

Another concern is the potential for a digital divide, where certain populations, especially older adults or those with limited technological literacy, may be left behind due to difficulties in adopting and using digital health tools. This could exacerbate healthcare disparities rather than alleviate them.

Furthermore, the rapid pace of technological innovation can sometimes outpace the ability of regulatory frameworks to keep up. This can lead to issues related to the quality, safety, and accuracy of digital health technologies. Clear guidelines and standards need to be established to ensure that digital health solutions are evidence-based and reliable.

In conclusion, the integration of digital health into the healthcare sector represents a transformative shift that is both improving and disrupting care. While challenges such as privacy concerns and the potential for a digital divide exist, the benefits of digital health are profound. From enhancing communication through EHRs and telemedicine to empowering patients with real-time health monitoring and leveraging AI for diagnostics, digital health has the potential to revolutionise healthcare delivery and improve patient outcomes. To maximise its benefits, it is essential for stakeholders to collaborate in addressing challenges, implementing robust cybersecurity measures, and establishing clear regulatory guidelines. With a balanced approach, digital health can ultimately lead to a more efficient, accessible, and patient-centred healthcare system.

Links

https://www.fiercehealthcare.com/digital-health/digital-health-funding-settles-down-2023-fewer-deals-smaller-check-sizes

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2710605/

https://galendata.com/disadvantages-of-technology-in-healthcare/

https://www.dko-law.com/blog/doctors-too-dependent-on-medical-technology/

Turing’s Vision: Navigating the Landscape of Ethical and Safe AI

First published 2022; revised 2023

At the dawn of the artificial intelligence era, there is an imperative need to navigate the complexities of AI ethics and safety. Ensuring that AI systems are both safe and ethically sound is no longer just a theoretical concern but a pressing practical issue that affects the global threads of industry, governance, and society at large. Drawing insights from Leslie, D. (2019) in “Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector”, published by The Alan Turing Institute, this essay explores the varied dimensions of AI’s responsible design and implementation.

The Alan Turing Institute positions itself as an aspirational, world-leading hub that examines the technical intricacies underpinning safe, ethical, and trustworthy AI. Committed to fostering responsible innovation and pioneering research breakthroughs, the Institute aims to go beyond mere theoretical discourse. It envisions a future where AI not only advances in capability but also upholds the core values of transparency, fairness, robustness, and human-centred design. Such an ambition necessitates a commitment to advancing AI transparency, ensuring the fairness of algorithmic systems, forging robust systems resilient against external threats, and cultivating AI-human collaborations that maintain human control.

However, the quest to realise this vision is not an isolated endeavour. It requires broad, interdisciplinary collaboration, connecting the dots between technical experts, industry leaders, policy architects, and the public. Aligning with the UK government’s Industrial Strategy and meeting the burgeoning global demand for informed guidance in AI ethics, the Institute’s strategy serves as a blueprint for those committed to the responsible growth of AI. Yet it is essential to remember that the responsible evolution of AI is not just about mastering the technology but about understanding its implications for the broader context of our society.

The dawn of the information age has been marked by an extraordinary convergence of factors: the expansive availability of big data, the unparalleled speed and reach of cloud computing platforms, and the maturation of intricate machine learning algorithms. This synergy has propelled us into an era of unmatched human potential, characterised by a digitally interwoven world where the power of AI stands as a beacon of societal improvement.

Already, we witness the profound impact of AI across various sectors. Essential social domains such as healthcare, education, transportation, food supply, energy, and environmental management have all been beneficiaries of AI-driven innovations. These accomplishments, however significant they may appear now, are perhaps only the tip of the iceberg. AI’s very nature, its inherent capability to evolve and refine itself with increased access to data and surging computing power, guarantees its continuous ascent in efficacy and utility. As we navigate further into the information age, it’s conceivable that AI will soon stand at the forefront, guiding the progression of critical public interests and shaping the contours of sustainable human development.

Such a vision, where AI aids humanity in addressing its most pressing challenges, is undeniably exhilarating. Yet, like any frontier technology that’s rapidly evolving, AI’s journey is fraught with pitfalls. A steep learning trajectory ensures that errors, misjudgments, and unintended consequences are not just possible but inevitable. AI, despite its immense promise, is not immune to these challenges.

Addressing these challenges is not a mere recommendation but a necessity. It is imperative to prioritise AI ethics and safety to ensure its responsible evolution and to maximise its public benefit. This means an in-depth integration of social and ethical considerations into every facet of AI deployment. It calls for a harmonised effort, requiring data scientists, product managers, data engineers, domain experts, and delivery managers to work in unison. Their collective goal? To align AI’s development with ethical values and principles that not only prevent harm but actively enhance the well-being of communities that come under its influence.

The emergence of the field of AI ethics is a testament to this necessity. Born out of a growing recognition of the potential individual and societal harms stemming from AI’s misuse, poor design, or unforeseen repercussions, AI ethics seeks to provide a compass by which we navigate the AI-driven future responsibly.

Understanding the evolution of AI and its implications requires us to first recognise the genesis of AI ethics. The eminent cognitive scientist and AI trailblazer, Marvin Minsky, once described AI as the art of enabling computers to perform tasks that, when done by humans, necessitate intelligence. This fundamental definition highlights a crucial aspect of the discourse surrounding AI: humans, when undertaking tasks necessitating intelligence, are held to standards of reliability, accuracy, and sound reasoning. We expect them to justify their decisions, and to act with fairness, equity, and reasonableness in their interactions.

However, the rise and spread of AI technologies have reshaped this landscape. As AI systems take over myriad cognitive functions, they introduce a conundrum. Unlike humans, these algorithmic processes aren’t directly accountable for their actions, nor can they be held morally responsible for the outcomes they produce. Essentially, while AI systems exhibit a form of ‘smart agency’, they lack inherent moral responsibility, creating a discernible ethical void.

Addressing this void has become paramount, giving birth to a host of frameworks within AI ethics. One such framework is the FAST Track Principles, which stand for Fairness, Accountability, Sustainability, and Transparency. These principles are designed to bridge the gap between AI’s capabilities and its intrinsic moral void. To foster an environment conducive to responsible AI development, it is vital that every stakeholder, from data scientists to policy experts, familiarises themselves with the FAST Track Principles. These principles should guide actions and decisions throughout the AI project lifecycle, underscoring the idea that creating ethical AI is a collective endeavour.

Delving deeper into the principle of fairness, one must remember that while AI systems might project a veneer of neutrality, they are ultimately products of human design. Humans, with all their inherent biases and contextual limitations, play a pivotal role in AI’s creation. At any stage of an AI project, from data extraction to model building, the spectres of human error, prejudice, and misjudgment can introduce biases. Moreover, AI systems often derive their accuracy by analysing data that might encapsulate age-old societal biases and discriminations, further complicating the fairness equation.

Addressing fairness in AI is far from straightforward. There isn’t a singular, foolproof method to eliminate biases or ensure fairness. However, by adopting best practices that focus on fairness-aware design and implementation, there’s potential to create systems that yield just and equitable outcomes. One foundational approach to fairness is the principle of discriminatory non-harm. It mandates that AI innovations should not result in harm due to biased or discriminatory outcomes. This principle, while seemingly basic, serves as a cornerstone, directing the development and deployment of AI systems towards a more equitable and fair future.

The Principle of Discriminatory Non-Harm sets forth that AI system designers and users should be deeply committed to reducing biases and preventing discriminatory outputs, especially when dealing with social or demographic data. This implies a few specific obligations. First, AI systems should be built upon data that is representative, accurate, and generalisable, ensuring “Data Fairness.” Second, the systems’ design should not include any variables, features, or processes that are morally objectionable or unjustifiable – this is “Design Fairness.” The systems should also be crafted to avoid producing discriminatory effects on individuals or groups – ensuring “Outcome Fairness.” Lastly, the onus is on the users to be adequately trained to use AI systems responsibly, embodying “Implementation Fairness.”
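
To make “Outcome Fairness” more tangible, a team might track a simple group-level metric such as the demographic parity difference, the gap in positive-decision rates between two groups. The short Python sketch below uses invented example data; it is one of many possible fairness metrics and a sketch only, not a measure mandated by the guide.

import numpy as np

# Illustrative data: model decisions (1 = approved) and a binary group label.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def demographic_parity_difference(y_pred, group_labels):
    """Difference in positive-decision rates between the two groups."""
    rate_a = y_pred[group_labels == 0].mean()
    rate_b = y_pred[group_labels == 1].mean()
    return abs(rate_a - rate_b)

gap = demographic_parity_difference(decisions, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0 means equal rates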

When considering the concept of Accountability in AI, the best practices for data processing as mentioned in Principle 6 of the Data Ethics Framework come to mind. However, the ever-evolving AI landscape brings forward distinct challenges, especially in public sector accountability. Two major challenges emerge: the “accountability gap” and the multifaceted nature of AI production processes. Automated decisions, inherently, are not self-explanatory. Unlike human agents, statistical models and AI’s underlying infrastructure don’t bear moral responsibility, creating a void in accountability. Coupled with this is the intricate nature of AI project deliveries involving a myriad of stakeholders, making it a daunting task to pinpoint responsibility if an AI system’s implementation has adverse consequences.

To address these challenges, it’s imperative to adopt a comprehensive approach to accountability that encompasses both Answerability and Auditability. Answerability stresses that human creators and users of AI systems should take full responsibility for the algorithmically-driven decisions. They should be ready to provide clear, coherent, and non-technical explanations for these decisions, ensuring that every stage of the AI process is accountable. Auditability, on the other hand, focuses on how to hold these AI system designers and implementers accountable. It emphasises the demonstration of both responsible design and use practices, and the justifiability of the outcomes.
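
One practical way to support both answerability and auditability is to record, for every algorithmically assisted decision, who was responsible, which model version was used, and what plain-language explanation was given. The Python sketch below shows a hypothetical audit record; the field names and example values are assumptions for illustration, not a prescribed schema.

import json
from datetime import datetime, timezone

def audit_record(case_id, model_version, decision, explanation, responsible_officer):
    """Assemble a simple, human-readable audit entry for one decision."""
    return {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "decision": decision,
        "plain_language_explanation": explanation,
        "responsible_officer": responsible_officer,
    }

entry = audit_record(
    case_id="2023-00042",
    model_version="risk-model-1.3.0",
    decision="refer for manual review",
    explanation="Flagged because two of the three key indicators exceeded thresholds.",
    responsible_officer="case.worker@example.gov",
)
print(json.dumps(entry, indent=2))  # would be stored in an append-only log in practice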

Another critical pillar is Sustainability. AI system designers and users must be continually attuned to the long-term and transformative effects their technologies might have on individuals and society at large. This proactive awareness ensures that the systems not only address the immediate needs but also consider the long-term societal impacts.

In tandem with sustainability is Safety. Besides considering the broader social ramifications of an AI system, it’s essential to address its technical sustainability and safety. Given that AI operates in an unpredictable environment, achieving technical safety becomes a challenging task. However, the importance of building a safe and reliable AI system cannot be overstated, especially when potential failures could result in harmful consequences and erode public trust. To achieve this, emphasis must be placed on the core technical objectives of accuracy, reliability, security, and robustness. This involves rigorous testing, consistent validation, and frequent reassessment of the system. Moreover, effective oversight mechanisms need to be integrated into the system’s real-world operation to ensure that it functions safely and as intended.
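
As a small illustration of what rigorous testing and consistent validation can look like in practice, the sketch below runs five-fold cross-validation with scikit-learn on a synthetic dataset. The model and data are placeholders; a real assessment would also cover reliability, security, and robustness checks beyond predictive accuracy.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; a real project would use its own validated dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation accuracy

print("Fold accuracies:", [round(s, 3) for s in scores])
print("Mean accuracy:", round(scores.mean(), 3))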

The intrinsic challenges of accuracy in artificial intelligence systems can be linked to the inherent complexities and unpredictability of the real world. When trying to model this chaotic reality, ensuring that an AI system’s predictions or classifications are precise is a significant task. Unavoidable noise in the data, models that fail to capture every aspect of the underlying patterns, and shifts in the data over time all contribute to these challenges.

On the other hand, the reliability of an AI system rests on its ability to consistently function in line with its intended design and purpose. This means that if a system is deemed reliable, users can trust that its operations will adhere to its set specifications, bolstering user confidence in the safety and predictability of its outcomes.

AI systems also face threats on the security front. Security is not just about safeguarding an AI system from potential external threats but also about ensuring that the system’s architecture remains uncompromised and that any data or information within it remains confidential. This integrity is paramount, especially when considering the adversarial threats that AI systems might face.

Robustness in AI, meanwhile, centres on an AI system’s ability to function effectively even under less than ideal conditions. Whether these conditions arise from intentional adversarial actions, human errors, or misalignments in automated learning objectives, the system’s ability to maintain its integrity is a testament to its robustness.

One of the more nuanced challenges that machine learning models face is the phenomenon of concept drift. When the historical data, which informs the model’s understanding, becomes outdated or misaligned with current realities, the model’s accuracy and reliability can suffer. Therefore, staying attuned to changes in the underlying data distribution is vital. Ensuring that the technical team is aware of the latest research on detecting and managing concept drift will be crucial to the continued success of AI projects.
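
A lightweight way to stay attuned to such changes is to compare the distribution of newly arriving feature values with the distribution seen at training time, for instance with a two-sample Kolmogorov-Smirnov test. The sketch below uses SciPy with synthetic data and an illustrative significance threshold; it is one simple monitoring idea, not the only way to detect drift.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference data seen at training time vs. newly arriving data with a shifted mean.
training_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
incoming_feature = rng.normal(loc=0.5, scale=1.0, size=1000)

statistic, p_value = ks_2samp(training_feature, incoming_feature)

ALPHA = 0.01  # illustrative threshold; real monitoring needs multiple-testing care
if p_value < ALPHA:
    print(f"Possible drift detected (KS statistic {statistic:.3f}, p={p_value:.1e})")
else:
    print("No evidence of drift in this feature")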

Another pressing concern in the realm of AI is adversarial attacks. These attacks cleverly manipulate input data, causing AI models to make grossly incorrect predictions or classifications. The subtle nature of these perturbations can lead to significant ramifications, especially in critical systems like medical imaging or autonomous vehicles. Recognising these vulnerabilities, there has been a surge in research in the domain of adversarial machine learning, aiming to safeguard AI systems from these subtle yet disruptive inputs.
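
The canonical textbook illustration of such an attack is the fast gradient sign method (FGSM), which nudges each input feature a small step in the direction that most increases the model’s loss. The NumPy sketch below applies it to a toy logistic-regression model with hand-picked weights; the perturbation size is deliberately exaggerated, and this is a didactic example rather than a realistic attack on, say, an imaging system.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model" with fixed, hand-picked weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.8, -0.4, 1.2])  # an input the model scores as clearly positive
y = 1.0                         # its true label

p = sigmoid(w @ x + b)
grad_x = (p - y) * w            # gradient of the cross-entropy loss w.r.t. the input

epsilon = 1.0                   # attack strength, exaggerated for illustration
x_adv = x + epsilon * np.sign(grad_x)

print("Original score:", round(float(p), 3))                          # ~0.94
print("Adversarial score:", round(float(sigmoid(w @ x_adv + b)), 3))  # ~0.21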

Equally concerning is the threat of data poisoning, where the very data that trains an AI system is tampered with, causing the system to generate inaccurate or harmful outputs. This kind of attack can be especially sinister as it might incorporate ‘backdoors’ into the system which, when triggered, can cause malfunctions. Therefore, beyond technical solutions, it becomes imperative to source data responsibly and ensure its integrity throughout the data handling process. The emphasis should be on responsible data management practices to ensure data quality throughout the system’s lifecycle.
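
One modest but useful safeguard along these lines is to record a cryptographic checksum of each training dataset when it is first vetted and to verify it before every retraining run, so that silent tampering is at least detectable. The Python sketch below uses hashlib; the file name and expected digest are placeholders.

import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# 'training_data.csv' and the expected digest below are placeholders.
EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of_file("training_data.csv")
if actual != EXPECTED:
    raise RuntimeError("Training data does not match the vetted checksum")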

In the world of artificial intelligence, the term “transparency” has taken on a nuanced and specialised meaning. While the everyday usage of the term typically evokes notions of clarity, openness, and straightforwardness, in AI ethics, transparency becomes even more multifaceted. One aspect of this is the capacity for AI systems to be interpretable. That is, those interacting with an AI system should be able to decipher how and why the system made a particular decision or acted in a certain way. This kind of transparency is about shedding light on the internal workings of the often enigmatic AI mechanisms, allowing for greater understanding and trust.
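
For simple model classes this kind of interpretability can be made very tangible: a linear model’s score decomposes into one contribution per feature, which can then be reported in plain terms. The sketch below uses toy weights and inputs invented for illustration; richer model classes need dedicated explanation techniques.

import numpy as np

feature_names = ["age", "blood_pressure", "prior_admissions"]
weights = np.array([0.02, 0.01, 0.80])   # toy linear-model coefficients
values  = np.array([54.0, 132.0, 2.0])   # one individual's inputs (toy values)
intercept = -3.0

contributions = weights * values
score = intercept + contributions.sum()

print(f"Model score: {score:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: contributes {c:+.2f} to the score")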

Furthermore, transparency isn’t limited to merely understanding the “how” and “why” of AI decisions. It also encompasses the ethical considerations behind both the design and deployment of AI systems. When AI systems are said to be transparent, it implies that they can be justified as ethical, unbiased, trustworthy, and safety-oriented both in their creation and their outcomes. This dual focus on process and product is vital.

In developing AI, teams are tasked with several responsibilities to ensure this two-tiered transparency. First, from a process perspective, there is a need to assure all stakeholders that the entire journey of creating the AI system was ethically sound, unbiased, and instilled with measures ensuring trust and safety. This includes not just designing with these values in mind but also ensuring auditability at every stage.

Secondly, when it comes to the outcome or product of AI, there’s the obligation to make sure that any decision made by the AI system is elucidated in ways that are understandable to non-experts. The explanations shouldn’t merely regurgitate mathematical or technical jargon but should be phrased in relatable terms, reflecting societal contexts. Furthermore, the results or behaviours of the AI should be defensible, fitting within parameters of fairness, trustworthiness, and ethical appropriateness.

In addition to these tasks, there’s a broader need for professional and institutional transparency. Every individual involved in the AI’s development and deployment should adhere to stringent standards that emphasise values like integrity, honesty, and neutrality. Their primary allegiance should be to the public’s best interests, superseding other considerations.

Moreover, throughout the AI development process, there should be an open channel for public oversight. Of course, certain information may need to remain confidential for valid reasons, like ensuring bad actors can’t exploit the system. But, by and large, the emphasis should be on openness.

Transitioning into the structural aspects of AI development, a Process-Based Governance (PBG) Framework emerges as a crucial tool. Such a framework is pivotal for integrating ethical considerations and best practices seamlessly into the actual development process. The guide may delve into specifics such as CRISP-DM, but it’s worth noting that the principles of responsible AI development can be incorporated into other workflow models, including KDD and SEMMA. Adopting such a framework helps ensure that the values underpinning ethical AI are not just theoretical but find active expression in every phase of the AI’s life cycle.
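
As a sketch of how a process-based governance framework might be attached to a standard workflow, the mapping below pairs the CRISP-DM phases with illustrative governance checkpoints. The checkpoint wording is invented for this example rather than quoted from the guide.

# Illustrative mapping of CRISP-DM phases to governance checkpoints.
PBG_CHECKPOINTS = {
    "Business understanding": ["Stakeholder impact assessment signed off"],
    "Data understanding":     ["Data fairness review (representativeness, provenance)"],
    "Data preparation":       ["Bias-relevant features documented and justified"],
    "Modelling":              ["Fairness and robustness metrics logged per experiment"],
    "Evaluation":             ["Outcome fairness and safety review before approval"],
    "Deployment":             ["Monitoring, audit logging and escalation route in place"],
}

for phase, checks in PBG_CHECKPOINTS.items():
    for check in checks:
        print(f"{phase}: {check}")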

Alan Turing’s simple sketch in 1936 was nothing short of revolutionary. With just a linear tape, symbols, and a set of rules, he demystified the very essence of calculation, giving birth to the conceptual foundation of the modern computer. His Turing machine wasn’t just a solution to the enigma of effective calculation; it was the conceptual forerunner of the digital revolution we live in today. This innovative leap, stemming from a quiet room at King’s College, Cambridge, is foundational to our digital landscape.

Fast forward to our present day, and we find ourselves immersed in a world where the lines between the physical and digital blur. The seamless interplay of connected devices, sophisticated algorithms, and vast cloud computing platforms is redefining our very existence. Technologies like the Internet of Things and edge computing are not just changing the way we live and work; they’re reshaping the very fabric of our society. AI is becoming more than just a tool or a technology; it is rapidly emerging as the fulcrum upon which our future balances. The possibilities it presents, both optimistic and cautionary, are monumental. It’s essential to realise that the trajectory of AI’s impact lies in our hands. The decisions we make today will shape the society of tomorrow, and the implications of these choices weigh heavily on our collective conscience.

It’s paramount to see that artificial intelligence isn’t just about codes and algorithms. It’s about humanity, our aspirations, our values, and our shared vision for the future. In many ways, the guide on AI ethics and safety serves as a compass, echoing Turing’s ethos by emphasising that the realm of AI, at its core, remains a profoundly human domain. Every line of code, every algorithmic model, every deployment carries with it a piece of human intention, purpose, and responsibility.

In essence, understanding the ethics and safety of AI isn’t just about mitigating risks or optimising outputs. It’s about introspection and realising that behind every technological advancement lie human choices. Responsible innovation isn’t just a catchphrase; it’s a call to action. Only by staying grounded in our shared ethical values and purpose-driven intentions can we truly harness AI’s potential. Let’s not just be passive recipients of technology’s gifts. Instead, let’s actively shape its direction, ensuring that our collective digital future resonates with our shared vision of humanity’s greatest aspirations.

Links

https://www.turing.ac.uk/news/publications/understanding-artificial-intelligence-ethics-and-safety

https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf