The Architecture of Data Processing in Computers

First published 2023

The heart of any computing system is its ability to process and manipulate data. Over the decades, the speed and efficiency of computers have evolved dramatically, yet the core concept of data processing has remained consistent. To understand this intricate dance between software and hardware, we must delve into the depths of a computer’s structure and operations. The subsequent discussion provides a glimpse into the inner workings of a computer, highlighting how program instructions and data work together, facilitated by the computer’s hardware components.

When one contemplates a computer, it is easy to overlook the complexity and brilliance behind every click, every opened application, and every typed letter. At the foundation of these operations are program instructions, encoded in machine code, and the data, often represented in binary format. This binary language, a series of 0s and 1s, is the lingua franca of computers and forms the basis for all its operations.

A prime example of this data processing is the interplay between a computer’s memory and its processor. Think of the computer’s memory, or RAM, as a bustling library. This library holds countless books (program instructions and data) that are swiftly brought out, read, and put back. But how does one find the right book amidst such vastness? Enter the processor, the librarian, adept at decoding and executing instructions. To ensure this librarian can find the right book efficiently, a catalogue system is vital. This is where the concept of “addressability” comes into play. Just as every book in a library has a unique code, every item of data or every instruction in the computer’s memory is stored at a location with its own distinct address. So, when the processor needs to fetch a specific instruction or piece of data, it uses this address to pinpoint its location.
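
To make the idea of addressability concrete, the sketch below models memory as a small table of numbered cells holding made-up instructions and data; the addresses, opcodes, and values are purely illustrative and do not correspond to any real machine’s layout.

# Memory modelled as numbered cells; instructions and data live side by side.
memory = {
    0x00: "LOAD  R1, 0x10",   # hypothetical instructions
    0x01: "ADD   R1, R2",
    0x02: "STORE R1, 0x11",
    0x10: 42,                 # data stored at its own addresses
    0x11: 0,
}

def fetch(address):
    """Return whatever is stored at the given address."""
    return memory[address]

print(fetch(0x01))   # -> "ADD   R1, R2"
print(fetch(0x10))   # -> 42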

Yet, the magic doesn’t end there. If you’ve ever looked at a computer’s motherboard, you might have noticed intricate patterns of small wires, reminiscent of the veins in a leaf. These are buses, the vital connectors between the memory and the processor. Think of them as the lanes of a superhighway, ensuring a smooth flow of traffic (data and instructions) between cities (memory and processor). There are specific buses – the address, data, and control buses – each playing a unique role. In a modern computer, for instance, a memory location typically holds a 64-bit word, so the buses form a massive highway network managing the flow of billions of bits.

When one delves deeper into the processor, it becomes evident that it’s not a singular entity but a composite of several components. The ALU, for instance, is the part of the processor responsible for executing arithmetic and logic operations; one can visualise it as a mathematician carrying out calculations and making logical comparisons. Meanwhile, registers serve as temporary storage spaces, akin to a student using scratch paper during an exam. The Control Unit ensures everything functions in synchrony. Imagine an orchestra, with various instruments playing different tunes; the control unit is the conductor, ensuring every note is played in perfect harmony.
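
The toy loop below pulls these pieces together: a list of made-up instructions stands in for a program, a dictionary stands in for the registers, a small function plays the role of the ALU, and the loop itself acts as the control unit. The instruction set is invented purely for illustration.

registers = {"R1": 0, "R2": 5}
program = [("LOAD", "R1", 7), ("ADD", "R1", "R2"), ("PRINT", "R1", None)]

def alu_add(a, b):
    # The ALU: performs the actual arithmetic.
    return a + b

for opcode, target, operand in program:   # the control unit stepping through the program
    if opcode == "LOAD":
        registers[target] = operand                     # place a value in a register
    elif opcode == "ADD":
        registers[target] = alu_add(registers[target], registers[operand])
    elif opcode == "PRINT":
        print(target, "=", registers[target])           # -> R1 = 12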

In essence, a computer, no matter how advanced, still relies on the foundational principles set in the early days of computing. From the motherboard to the processor, every component works seamlessly, translating binary data into the myriad tasks we witness daily. It’s a testament to the marvel of engineering and the limitless boundaries of human innovation.

Links

https://www.spiceworks.com/tech/tech-general/articles/what-is-computer-architecture/

The Nature and Importance of Data Representation in Computers

First published 2023

The evolution and functionality of computers have always revolved around their innate ability to process and manage data. Deriving its name from the word “compute”, which means to calculate or work out, a computer is essentially a device that performs calculations. These calculations encompass a wide range of tasks such as arithmetic operations, sorting lists, or even something as intricate as determining the movement of a character in a game. To put it succinctly, a dictionary defines a computer as “an electronic device for storing and processing data, according to instructions given to it in a program.”

At the core of this machine’s function is its reliance on human-driven instructions. Without the blueprint provided by a human, in the form of programming, a computer remains dormant, unable to execute any task. It is only when these instructions are fed to the system that the computer transforms into a tool capable of astounding tasks. These tasks are carried out at a pace and accuracy unmatched by human capabilities, as demonstrated by the speed and precision in the video “Fujitsu Motherboard Production”, showcasing a computer-controlled system in action. This ability to process vast amounts of information at rapid speeds without errors is one of the many advantages computers have over humans.

Yet, it would be misguided to consider computers infallible or equivalent to human cognition. While they may possess powerful processors often referred to as “brains”, they do not possess the capability to think autonomously. Unlike humans, they are void of emotions, common sense, or the ability to ponder over problems using abstract thought. Hence, despite their strengths, there are countless everyday tasks that remain beyond their realm of capabilities.

In the vast world of computing, ‘data’ is the quintessence. Whether it’s numbers, text, graphics, sound, or video, it’s all data. However, to be understood and processed by a computer, this data needs to be translated into a format the computer can decode. This is where the concept of binary comes into play. Computers fundamentally operate by rapidly toggling circuits, or transistors, between two states – on and off. These two states are symbolised by the numbers 1 and 0, respectively. These numbers, known as binary digits, or bits, are the foundational blocks of data representation in computers.
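
As a quick illustration of the idea (the values here are arbitrary), the same information can be written as an ordinary number, as a pattern of bits, and decoded back again:

value = 42
print(format(value, "08b"))      # '00101010' - the number 42 as eight bits (one byte)
print(format(ord("A"), "08b"))   # '01000001' - text characters are stored as bits too
print(int("00101010", 2))        # 42 - the same bits decode back to the original value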

Software, data, and almost every operational aspect of a computer are encoded using these bits. Looking back at history, early computers such as the 1975 Altair 8800 processed data in groups of 8 bits, a grouping termed a byte. Bytes then became the primary metric for memory and storage. In today’s age, computers handle data ranging from Megabytes (MB) to Gigabytes (GB), and even more monumental units. By 2015, a staggering 8 Zettabytes of data was estimated to be stored across global computer systems.

While these numbers sound vast and incomprehensible, it is crucial to use appropriate units when conveying such information. Expressing data sizes in suitable units ensures clarity and comprehensibility. For instance, stating that a high-definition movie occupies 1,717,986,918 bytes is cumbersome and difficult to grasp. It is far more comprehensible to express it as roughly 1.6 GB. Thus, correct representation not only simplifies understanding but ensures that information is delivered meaningfully and efficiently.
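
The conversion behind that figure is simple arithmetic; the snippet below shows both the 1024-based convention (which yields the 1.6 figure quoted above, strictly gibibytes) and the 1000-based decimal convention.

size_bytes = 1_717_986_918
print(round(size_bytes / 1024**3, 2))   # 1.6  - binary prefixes (1 GiB = 1024**3 bytes)
print(round(size_bytes / 1000**3, 2))   # 1.72 - decimal prefixes (1 GB = 1000**3 bytes)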

In conclusion, data representation lies at the heart of computing. From the earliest computers to today’s advanced systems, the binary system of 1s and 0s has remained the fundamental language of machines. This intricate mechanism allows computers to process vast amounts of information swiftly and accurately, solidifying their role as invaluable tools in today’s digital age. However, as we marvel at their capabilities, it is essential to remember that they still operate within the confines of human programming and lack the nuances of human thought and emotion. As the world continues to generate colossal amounts of data, the importance of understanding and efficiently conveying data units only grows. It serves as a testament to the symbiotic relationship between man and machine, where both are indispensable to the other’s success and progress.

Links

home.adelphi.edu/~siegfried/cs170/170l1.pdf

Ethical and Practical Dimensions of Big Data in the NHS

First published 2022

Understanding the concept of Big Data in the field of medicine is relatively straightforward: it involves using extensive volumes of medical information to uncover trends or correlations that may not be discernible in smaller datasets. However, one might wonder why Big Data hasn’t been more widely applied in this context in the NHS. What sets industries like Google, Netflix, and Amazon apart, enabling them to effectively harness Big Data for providing precise and personalised real-time information based on online search and purchasing activities, compared to the National Health Service?

An examination of these thriving industries reveals a key distinction: they have access to data that is freely and openly provided by customers and is delivered directly and centrally to the respective companies. This wealth of detailed data encompasses individual preferences and aversions, facilitating accurate predictions for future online interactions.

Could it be feasible to use extensive volumes of medical data, derived from individual patient records, to uncover new risks or therapeutic possibilities that can then be applied on a personalised level to enhance patient outcomes? When we compare the healthcare industry to other sectors, the situation is notably distinct. In healthcare, medical records, which contain highly sensitive personal information, are carefully protected and not openly accessible. Typically, data remains isolated within clinic or hospital records, lacking a centralised system for sharing that would enable the rapidity and scale of data necessary to fully harness Big Data techniques. Medical data is also intricate and less readily “usable” in comparison to the data provided to major corporations, often requiring processing to render it into a readily usable format. Additionally, the technical infrastructure required for the movement, manipulation, and management of medical data is not readily accessible.

In a general sense, significant obstacles exist in terms of accessing data, and these obstacles encompass both philosophical and practical dimensions. To enhance the transformation of existing data into novel healthcare solutions, several aspects must be tackled. These encompass, among other things, the gathering and standardisation of diverse datasets, the careful curation of the resulting refined data, securing prior informed consent for the use of de-identified data, and the capacity to offer these datasets for further use by the healthcare and research communities.

To gain a deeper understanding of the opportunities within the clinical field and why the adoption and adaptation of these techniques haven’t been a straightforward transfer from other industries, it’s beneficial to examine both the similarities and distinctions between clinical Big Data and data used in other sectors. Industries typically work with what can truly be labelled as Big Data, characterised by substantial volume, rapid velocity, and diversity, but often exhibiting low information density. These data are frequently freely obtained, stemming from an individual’s incidental digital activities in exchange for services, serving as a proxy indicator for specific behaviours that enable the anticipation of patterns, trends, and outcomes. Essentially, such data are acquired at the moment services are accessed; they simply exist as a by-product of that activity, or they do not.

Comparable data can be found in clinical settings as well. For instance, during surgery, there is continuous monitoring of physiological parameters through multiple devices, generating substantial-volume, high-velocity, and diverse data that necessitate real-time processing to identify readings falling outside predefined thresholds, prompting immediate intervention by attending clinicians. On the other hand, there are instances of lower-volume data, such as the day-to-day accumulation of clinical test results, which contribute to updated diagnoses and medical management. Likewise, the analysis of population-based clinical data has the capability to forecast trends in public health, like predicting the timing of infectious disease outbreaks. In this context, velocity offers “real-time” prospective insights and allows for trend forecasting. And the data are clearly attributable to their source, whether that be a patient in the operating room or a specific geographical population experiencing the winter flu season.

The primary use of this real-time information is to forecast future trends through predictive modelling, without attempting to provide explanations for the findings. However, a more immediate focus of Big Data is the extensive clinical data already stored in hospitals, aiming to address the question of why specific events are occurring. These data have the potential, provided they can be effectively integrated and analysed, to offer insights into the causes of diseases, enable their detection and diagnosis, guide treatment and management, and facilitate the development of future drugs and interventions.

To assimilate this data, substantial computing power well beyond what an individual can manage is required, thus fitting the definition of Big Data. The data will largely be population-specific and then applied to individuals (e.g., examining patient groups with different disease types or processes to gain new insights for individual benefit). Importantly, this data will be collected retrospectively, rather than being acquired prospectively.

Lastly, while non-medical Big Data has often been incidental, freely available, and of low information density, clinical Big Data will be intentionally gathered, incurring costs (borne by someone), and characterised by high information density. This is more akin to business intelligence, where Big Data techniques are needed to derive measurements and detect trends (not just predict them) that would otherwise remain concealed or beyond human inspection alone.

Patient data, regardless of its nature, often seems to be associated with the medical institutions that hold it. However, it’s essential to recognise that these institutions function as custodians of the data; the data itself belongs to the patients. Access to and use of this data beyond clinical purposes necessitate the consent of the patients. This immediately poses a challenge when it comes to the rapid use of the extensive data already contained in clinical records.

While retrospective, hypothesis-driven research can be conducted on specific anonymised data, as is common in research, it’s important to note that once a study concludes, the data should ideally be deleted. This approach contradicts the principles of advancing medical knowledge, particularly when employing Big Data techniques that involve thousands to millions of data points requiring significant processing. Losing such valuable data at the conclusion of a project is counterproductive.

Prospective patient consent to store and use their data offers a more robust model, enabling the accumulation of substantial datasets that can be subsequently subjected to hypothesis-driven research questions. Although foregoing the use of existing retrospective data may appear wasteful, the speed (velocity) at which new data are generated in the NHS makes consented data far more valuable. Acquiring patient consent, however, often necessitates on-site personnel to engage with patients. Alternatively, options like patients granting blanket consent for data usage may be viable, provided that such consent is fully informed.

This dilemma has come to the forefront due to the implementation of the EU General Data Protection Regulation (GDPR) in 2018, triggering an international discourse on the sharing of Big Data in healthcare. In 2021, the UK government commissioned the ‘Goldacre Review’ into how to create big data sets, and how to ensure the “efficient and safe use of health data for research and analysis can benefit patients and the healthcare sector”. The review concluded that it is essential to invest in safe and trusted platforms for data and high-quality data curation to allow researchers and AI creators to realise the potential of the data. This data “represents deeply buried treasure, that can help prevent suffering and death, around the planet, on a biblical scale.”

Following the Goldacre Review, the UK government launched the ‘National Data Strategy’, which supports the creation of high-quality big data, and ‘Data Saves Lives’, which specifically sets out to “make better use of healthcare data and to save lives”. The ‘Data Saves Lives’ initiative exemplifies the progressive approach the UK has taken towards harnessing the power of Big Data in healthcare. Recognising the transformative potential of large-scale medical datasets, the initiative seeks to responsibly leverage patient data to drive innovations in medical research and clinical care. There’s a recognition that while industries like Netflix or Amazon can instantly access and analyse user data, healthcare systems globally, including the NHS, must manoeuvre through more complex ethical, legal, and logistical terrains. Patient data is not just another statistic; it is a deeply personal narrative that holds the key to both individual and public health solutions. Ensuring its privacy, obtaining informed consent, and simultaneously making it available for meaningful research is a balancing act, one that the NHS is learning to master.

In conclusion, the use of Big Data in the realm of healthcare differs significantly from its application in other industries, primarily due to the sensitive nature of the data and the ethical implications of its use. The potential benefits of harnessing this data are immense, from individualised treatments to large-scale public health interventions. Yet, the complexities surrounding its access and use necessitate a thoughtful, patient-centric approach. Initiatives like ‘Data Saves Lives’ signify the healthcare sector’s commitment to unlocking the potential of Big Data, while ensuring patients remain at the heart of the conversation. As the NHS and other global healthcare entities navigate this promising frontier, the underlying ethos must always prioritise patient welfare, trust, and transparency.

Links

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6502603/

https://www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis

https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy

https://digital.nhs.uk/services/national-data-opt-out/understanding-the-national-data-opt-out/confidential-patient-information

Big Data: Two Sides of An Argument

First published 2022

In this contemporary era, technology is continually advancing, leading to the accumulation of personal data in the form of numerous digits and figures. The term ‘Big Data’ refers to data that contains greater variety, arriving in increasing volumes and with more velocity – often referred to as the three Vs. In simpler terms, big data encompasses extensive and intricate datasets, particularly those stemming from novel data sources. These datasets are of such immense volume that conventional data processing software is inadequate for handling them. Nevertheless, these vast pools of data can be harnessed to solve business challenges that were previously insurmountable. Big Data is also what makes modern AI work: for AI algorithms to correctly recognise and ‘intelligently’ understand patterns and correlations, they need access to a huge amount of data with the right volume, velocity, and variety.

Many individuals are concerned about safeguarding this intangible yet highly valuable aspect of their lives. Given the profound importance people place on their privacy, numerous inquiries emerge regarding the ultimate custodians of this information. What if it came to light that corporations were exploiting loopholes in data privacy regulations to further their own financial gains? Two articles examine the concept of exposing private information: “Private License Plate Scanners Amassing Vast Databases Open to Highest Bidders” (RT, 2014) and “Who Has The Right to Track You?” (David Sirota, 2014). While unveiling how specific businesses profit from the scanning of license plates and the collection of individuals’ personal data, both authors effectively employ a range of persuasive techniques to sway their readers.

Pathos serves as a rhetorical device that aims to evoke emotional responses from the audience. In the second article, titled “Who Has The Right to Track You?”, David Sirota adeptly employs pathos to establish a strong emotional connection with his readers. Starting with the article’s title and continuing with questions like, “Do corporations have a legal right to track your car?”, he deliberately strikes a chord of apprehension within the reader. Sirota uses phrases such as “mass surveillance” and “mass photography,” repeatedly emphasising the accumulation of “millions of license plate images” to instill a sense of insecurity in the reader.

Throughout the article, he maintains a tone of genuine concern and guardianship on his part, often addressing the reader in the second person and assuring them that he is an advocate for “individuals’ privacy rights.” This approach enables him to forge a connection with the audience, making them feel as though he is actively looking out for their well-being.

The author of the first article, RT, employs pathos to engage with readers from a contrasting standpoint. The article uses phrases such as “inhibiting scanners would…create a safe haven…for criminals” and “reduce the safety of our officers, and it could ultimately result in lives lost”. These statements are crafted to instill fear in the audience, persuading them to consider the necessity of sacrificing their privacy for the sake of law enforcement’s ability to safeguard them. RT avoids using the term “mass surveillance” and instead employs more lighthearted expressions, noting that the scanners “scoop up 1,800 plates a minute”. By using this less threatening language, such as “scoop up,” the author intends to alleviate any concerns readers may have about this practice, portraying it in a more benign light.

Both authors employ the rhetorical device of logos, which involves using logic and reason to persuade their respective audiences. Sirota, for instance, provides data such as the cameras in question “capturing data on over 50 million vehicles each month” and their widespread use in major metropolitan areas. This substantial data serves to evoke discomfort in the reader and cultivate a fundamental distrust in these surveillance systems. Sirota further invokes reason by highlighting that valuable information like “household income” is being collected to enable companies to target consumers more effectively. Through this logical approach, he underscores the ethical concerns regarding how companies disseminate such information to willing clients.

In contrast, RT employs logos to assuage the reader’s concerns about data collection. He emphasises that the primary users of this data collection are “major banks, tracking those defaulting on loans,” and the police, who use it to apprehend criminals. Essentially, RT is conveying to the reader that as long as they are not engaged in wrongdoing, there should be no cause for alarm. Moreover, he reassures the reader that illicit use of scanning procedures is an uncommon occurrence, citing an environment owner who states, “If we saw scanning like this being done, we would throw them out”. This logical argument is designed to ease the reader’s anxieties about the potential misuse of data collection systems.

Both authors employ ethos in their persuasive efforts, with Sirota demonstrating the stronger use of this rhetorical appeal. One factor contributing to the weakness of the first article is the credibility of its sources, whose quotations often originate from heavily biased parties, such as the large corporations themselves. For instance, the person quoted as stating, “I fear that the proposed legislation would essentially create a safe haven in the Commonwealth for certain types of criminals, it would reduce the safety of our officers, and it could ultimately result in lives lost,” is not a law enforcement officer, attorney, or legislator; rather, it is Brian Shockley, the vice president of marketing at Vigilant, the corporate parent of Digital Recognition. It is problematic for the reader to be frightened into relinquishing their privacy by a corporation that stands to profit from it.

In contrast, Sirota cites sources with high credibility, or extrinsic ethos, throughout his article. He quotes ACLU attorney Catherine Crump, who states: “One could argue that the privacy implications of a private individual taking a picture of a public place are significantly different from a company collecting millions of license plate images…there may be a justification for regulation.” Sirota presents a relatable source representing the public’s interests from a legal perspective, rather than one aligned with a corporation seeking to gain from the situation.

The balance between corporate and national security interests on one hand, and individual privacy and rights on the other, continues to be a significant subject in our increasingly tech-driven society. The authors of the articles examined in this discussion skillfully employed ethos, pathos, and logos to build their cases regarding the use of private license plate scanners. Numerous journalists and news outlets have also contributed their perspectives on this matter, aiming to educate the public about both sides of the argument. While journalists and writers may present a particular viewpoint, it ultimately falls upon the reader to carefully contemplate all the ramifications of the debate.

Links

https://h2o.ai/wiki/big-data/

https://www.rt.com/usa/license-scanners-private-database-046/

https://inthesetimes.com/article/do-companies-have-a-right-to-track-you

Digital Health: Improving or Disrupting Healthcare?

First published 2022; revised 2023

In recent years, the integration of digital technology into the healthcare sector has led to a transformative shift in how medical care is delivered and managed. This phenomenon, often referred to as “digital health,” encompasses a wide range of technological advancements, including electronic health records, telemedicine, wearable devices, health apps, and artificial intelligence. As the healthcare industry grapples with the complexities of this digital revolution, a pressing question emerges: is digital health primarily improving or disrupting care? This leads to questions about the multifaceted impact of digital health on healthcare systems, patients, and professionals, ultimately suggesting that while challenges exist, the potential benefits of digital health far outweigh its disruptive aspects.

One of the most significant advantages of digital health is its potential to improve the quality and accessibility of care. Electronic health records (EHRs) have streamlined the process of storing and sharing patient information among healthcare providers, reducing the chances of errors and ensuring more coordinated care. This enhanced communication promotes patient safety and can lead to better health outcomes.

Telemedicine, a subset of digital health, has revolutionised the way healthcare is delivered. It enables remote consultations, making medical expertise accessible to individuals who may have previously faced geographical or logistical barriers to care. This is especially crucial in rural or underserved areas, where access to specialised medical services might be limited.

Wearable devices and health apps empower patients to monitor their health in real time, providing valuable insights that can promote preventive care and early intervention. For instance, individuals with chronic conditions like diabetes can track their blood sugar levels and receive alerts when they deviate from the normal range. This not only keeps patients informed about their health but also enables healthcare providers to tailor treatment plans more effectively.
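
A very reduced sketch of that kind of monitoring logic is shown below; the readings and the target range are invented for illustration and are not clinical guidance.

LOW, HIGH = 4.0, 10.0                   # hypothetical blood-glucose range in mmol/L
readings = [5.2, 6.1, 11.4, 3.6, 7.0]   # made-up sensor readings

for reading in readings:
    if reading < LOW or reading > HIGH:
        print(f"Alert: {reading} mmol/L is outside the target range")
    else:
        print(f"OK: {reading} mmol/L")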

Artificial intelligence (AI) is another area where digital health is making substantial strides. Machine learning algorithms can analyse large datasets to identify patterns and predict disease outbreaks, thereby improving public health surveillance. Additionally, AI-powered diagnostic tools assist healthcare professionals in interpreting medical images with higher accuracy, aiding in early disease detection.

While the potential benefits of digital health are undeniable, there are also challenges and disruptions that need to be addressed. Privacy and security concerns are prominent issues, as the collection and storage of vast amounts of personal health data raise the risk of unauthorised access and breaches. Ensuring robust cybersecurity measures is imperative to protect patients’ sensitive information.

Another concern is the potential for a digital divide, where certain populations, especially older adults or those with limited technological literacy, may be left behind due to difficulties in adopting and using digital health tools. This could exacerbate healthcare disparities rather than alleviate them.

Furthermore, the rapid pace of technological innovation can sometimes outpace the ability of regulatory frameworks to keep up. This can lead to issues related to the quality, safety, and accuracy of digital health technologies. Clear guidelines and standards need to be established to ensure that digital health solutions are evidence-based and reliable.

In conclusion, the integration of digital health into the healthcare sector represents a transformative shift that is both improving and disrupting care. While challenges such as privacy concerns and the potential for a digital divide exist, the potential benefits of digital health are profound. From enhancing communication through EHRs and telemedicine to empowering patients with real-time health monitoring and leveraging AI for diagnostics, digital health has the potential to revolutionise healthcare delivery and improve patient outcomes. To maximise its benefits, it is essential for stakeholders to collaborate in addressing challenges, implementing robust cybersecurity measures, and establishing clear regulatory guidelines. With a balanced approach, digital health can ultimately lead to a more efficient, accessible, and patient-centered healthcare system.

Links

https://www.fiercehealthcare.com/digital-health/digital-health-funding-settles-down-2023-fewer-deals-smaller-check-sizes

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2710605/

https://galendata.com/disadvantages-of-technology-in-healthcare/

https://www.dko-law.com/blog/doctors-too-dependent-on-medical-technology/

Turing’s Vision: Navigating the Landscape of Ethical and Safe AI

First published 2022; revised 2023

In the dawn of the artificial intelligence era, there is an imperative need to navigate the complexities of AI ethics and safety. Ensuring that AI systems are both safe and ethically sound is no longer just a theoretical concern but a pressing practical issue that affects the global threads of industry, governance, and society at large. Drawing insights from Leslie, D. (2019) in “Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector”, published by The Alan Turing Institute, this essay explores the varied dimensions of AI’s responsible design and implementation.

The Alan Turing Institute forges its position as an aspirational, world-leading hub that examines the technical intricacies that underpin safe, ethical, and trustworthy AI. Committed to fostering responsible innovation and pioneering research breakthroughs, the Institute aims to go beyond mere theoretical discourses. It envisions a future where AI not only advances in capabilities but also upholds the core values of transparency, fairness, robustness, and human-centered design. Such an ambition necessitates a commitment to advancing AI transparency, ensuring the fairness of algorithmic systems, forging robust systems resilient against external threats, and cultivating AI-human collaborations that maintain human control.

However, the quest to realise this vision is not an isolated endeavour. It requires broad, interdisciplinary collaborations, connecting the dots between technical experts, industry leaders, policy architects, and the public. Aligning with the UK government’s Industrial Strategy and meeting the burgeoning global demand for informed guidance in AI ethics, the Institute’s strategy serves as a blueprint for those committed to the responsible growth of AI. Yet it is essential to remember that the responsible evolution of AI is not just about mastering the technology but about understanding its implications for the broader context of our society.

The dawn of the information age has been marked by an extraordinary convergence of factors: the expansive availability of big data, the unparalleled speed and reach of cloud computing platforms, and the maturation of intricate machine learning algorithms. This synergy has propelled us into an era of unmatched human potential, characterised by a digitally interwoven world where the power of AI stands as a beacon of societal improvement.

Already, we witness the profound impact of AI across various sectors. Essential social domains such as healthcare, education, transportation, food supply, energy, and environmental management have all been beneficiaries of AI-driven innovations. These accomplishments, however significant they may appear now, are perhaps only the tip of the iceberg. AI’s very nature, its inherent capability to evolve and refine itself with increased access to data and surging computing power, guarantees its continuous ascent in efficacy and utility. As we navigate further into the information age, it’s conceivable that AI will soon stand at the forefront, guiding the progression of critical public interests and shaping the contours of sustainable human development.

Such a vision, where AI aids humanity in addressing its most pressing challenges, is undeniably exhilarating. Yet, like any frontier technology that’s rapidly evolving, AI’s journey is fraught with pitfalls. A steep learning trajectory ensures that errors, misjudgments, and unintended consequences are not just possible but inevitable. AI, despite its immense promise, is not immune to these challenges.

Addressing these challenges is not a mere recommendation but a necessity. It is imperative to prioritise AI ethics and safety to ensure its responsible evolution and to maximise its public benefit. This means an in-depth integration of social and ethical considerations into every facet of AI deployment. It calls for a harmonised effort, requiring data scientists, product managers, data engineers, domain experts, and delivery managers to work in unison. Their collective goal? To align AI’s development with ethical values and principles that not only prevent harm but actively enhance the well-being of communities that come under its influence.

The emergence of the field of AI ethics is a testament to this necessity. Born out of a growing recognition of the potential individual and societal harms stemming from AI’s misuse, poor design, or unforeseen repercussions, AI ethics seeks to provide a compass by which we navigate the AI-driven future responsibly.

Understanding the evolution of AI and its implications requires us to first recognise the genesis of AI ethics. The eminent cognitive scientist and AI trailblazer, Marvin Minsky, once described AI as the art of enabling computers to perform tasks that, when done by humans, necessitate intelligence. This fundamental definition highlights a crucial aspect of the discourse surrounding AI: humans, when undertaking tasks necessitating intelligence, are held to standards of reliability, accuracy, and sound reasoning. We expect them to justify their decisions, and to act with fairness, equity, and reasonableness in their interactions.

However, the rise and spread of AI technologies have reshaped this landscape. As AI systems take over myriad cognitive functions, they introduce a conundrum. Unlike humans, these algorithmic processes aren’t directly accountable for their actions, nor can they be held morally responsible for the outcomes they produce. Essentially, while AI systems exhibit a form of ‘smart agency’, they lack inherent moral responsibility, creating a discernible ethical void.

Addressing this void has become paramount, giving birth to a host of frameworks within AI ethics. One such framework is the FAST Track Principles, which stands for Fairness, Accountability, Sustainability, and Transparency. These principles are designed to bridge the gap between AI’s capabilities and its intrinsic moral void. To foster an environment conducive to responsible AI development, it is vital that every stakeholder, from data scientists to policy experts, familiarises themselves with the FAST Track Principles. These principles should guide actions and decisions throughout the AI project lifecycle, underscoring the idea that creating ethical AI is a collective endeavor.

Delving deeper into the principle of fairness, one must remember that while AI systems might project a veneer of neutrality, they are ultimately products of human design. Humans, with all their inherent biases and contextual limitations, play a pivotal role in AI’s creation. At any stage of an AI project, from data extraction to model building, the spectres of human error, prejudice, and misjudgment can introduce biases. Moreover, AI systems often derive their accuracy by analysing data that might encapsulate age-old societal biases and discriminations, further complicating the fairness equation.

Addressing fairness in AI is far from straightforward. There isn’t a singular, foolproof method to eliminate biases or ensure fairness. However, by adopting best practices that focus on fairness-aware design and implementation, there’s potential to create systems that yield just and equitable outcomes. One foundational approach to fairness is the principle of discriminatory non-harm. It mandates that AI innovations should not result in harm due to biased or discriminatory outcomes. This principle, while seemingly basic, serves as a cornerstone, directing the development and deployment of AI systems towards a more equitable and fair future.

The Principle of Discriminatory Non-Harm sets forth that AI system designers and users should be deeply committed to reducing biases and preventing discriminatory outputs, especially when dealing with social or demographic data. This implies a few specific obligations. First, AI systems should be built upon data that is representative, accurate, and generalisable, ensuring “Data Fairness.” Second, the systems’ design should not include any variables, features, or processes that are morally objectionable or unjustifiable – this is “Design Fairness.” The systems should also be crafted to avoid producing discriminatory effects on individuals or groups – ensuring “Outcome Fairness.” Lastly, the onus is on the users to be adequately trained to use AI systems responsibly, embodying “Implementation Fairness.”
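
As a concrete, if deliberately simplified, illustration of checking “Outcome Fairness”, the sketch below compares the rate of positive decisions a model gives to two groups. The records are made up, and comparing rates in this way is only one of many possible fairness measures, not a method prescribed by the guide.

decisions = [            # (group, model_decision) pairs - invented records
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def positive_rate(group):
    outcomes = [decision for g, decision in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = positive_rate("A") - positive_rate("B")
print(f"Group A positive rate: {positive_rate('A'):.2f}")   # 0.75
print(f"Group B positive rate: {positive_rate('B'):.2f}")   # 0.25
print(f"Gap: {gap:.2f}")   # a large gap is a signal to investigate, not proof of bias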

When considering the concept of Accountability in AI, the best practices for data processing as mentioned in Principle 6 of the Data Ethics Framework come to mind. However, the ever-evolving AI landscape brings forward distinct challenges, especially in public sector accountability. Two major challenges emerge: the “accountability gap” and the multifaceted nature of AI production processes. Automated decisions, inherently, are not self-explanatory. Unlike human agents, statistical models and AI’s underlying infrastructure don’t bear moral responsibility, creating a void in accountability. Coupled with this is the intricate nature of AI project deliveries involving a myriad of stakeholders, making it a daunting task to pinpoint responsibility if an AI system’s implementation has adverse consequences.

To address these challenges, it’s imperative to adopt a comprehensive approach to accountability that encompasses both Answerability and Auditability. Answerability stresses that human creators and users of AI systems should take full responsibility for the algorithmically-driven decisions. They should be ready to provide clear, coherent, and non-technical explanations for these decisions, ensuring that every stage of the AI process is accountable. Auditability, on the other hand, focuses on how to hold these AI system designers and implementers accountable. It emphasises the demonstration of both responsible design and use practices, and the justifiability of the outcomes.

Another critical pillar is Sustainability. AI system designers and users must be continually attuned to the long-term and transformative effects their technologies might have on individuals and society at large. This proactive awareness ensures that the systems not only address the immediate needs but also consider the long-term societal impacts.

In tandem with sustainability is Safety. Besides considering the broader social ramifications of an AI system, it’s essential to address its technical sustainability and safety. Given that AI operates in an unpredictable environment, achieving technical safety becomes a challenging task. However, the importance of building a safe and reliable AI system cannot be overstated, especially when potential failures could result in harmful consequences and erode public trust. To achieve this, emphasis must be placed on the core technical objectives of accuracy, reliability, security, and robustness. This involves rigorous testing, consistent validation, and frequent reassessment of the system. Moreover, effective oversight mechanisms need to be integrated into the system’s real-world operation to ensure that it functions safely and as intended.

The intrinsic challenges of accuracy in artificial intelligence systems can be linked to the inherent complexities and unpredictability of the real world. When trying to model this chaotic reality, it’s a significant task to ensure that an AI system’s predictions or classifications are precise. Data noise, which is unavoidable, combined with the potential that a model might not capture all aspects of the underlying patterns and changes in data over time, can all contribute to these challenges.

On the other hand, the reliability of an AI system rests on its ability to consistently function in line with its intended design and purpose. This means that if a system is deemed reliable, users can trust that its operations will adhere to its set specifications, bolstering user confidence in the safety and predictability of its outcomes.

AI systems also face threats on the security front. Security is not just about safeguarding an AI system from potential external threats but also ensuring that the system’s architecture remains uncompromised and that any data or information within it remains confidential. This integrity is paramount, especially when considering the potential adversarial threats that AI systems might face.

Robustness in AI, meanwhile, centres on an AI system’s ability to function effectively even under less than ideal conditions. Whether these conditions arise from intentional adversarial actions, human errors, or misalignments in automated learning objectives, the system’s ability to maintain its integrity is a testament to its robustness.

One of the more nuanced challenges that machine learning models face is the phenomenon of concept drift. When the historical data, which informs the model’s understanding, becomes outdated or misaligned with current realities, the model’s accuracy and reliability can suffer. Therefore, staying attuned to changes in the underlying data distribution is vital. Ensuring that the technical team is aware of the latest research on detecting and managing concept drift will be crucial to the continued success of AI projects.
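
A minimal drift check, under the assumption that a single numeric input feature is being monitored, might simply compare recent values against the training-time baseline; real systems use more robust statistical tests, but the idea is the same.

import statistics

training_feature = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]    # values seen at training time
live_feature     = [12.9, 13.4, 12.7, 13.1, 13.0, 12.8]  # values arriving in production

baseline_mean = statistics.mean(training_feature)
baseline_sd   = statistics.stdev(training_feature)
shift = abs(statistics.mean(live_feature) - baseline_mean) / baseline_sd

if shift > 3:   # threshold chosen purely for illustration
    print(f"Possible concept drift: input mean has shifted by {shift:.1f} standard deviations")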

Another pressing concern in the realm of AI is adversarial attacks. These attacks cleverly manipulate input data, causing AI models to make grossly incorrect predictions or classifications. The subtle nature of these perturbations can lead to significant ramifications, especially in critical systems like medical imaging or autonomous vehicles. Recognising these vulnerabilities, there has been a surge in research in the domain of adversarial machine learning, aiming to safeguard AI systems from these subtle yet disruptive inputs.

Equally concerning is the threat of data poisoning, where the very data that trains an AI system is tampered with, causing the system to generate inaccurate or harmful outputs. This kind of attack can be especially sinister as it might incorporate ‘backdoors’ into the system, which when triggered, can cause malfunctions. Therefore, beyond technical solutions, it becomes imperative to source data responsibly and ensure its integrity throughout the data handling process. The emphasis should be on responsible data management practices to ensure data quality throughout the system’s lifecycle.

In the world of artificial intelligence, the term “transparency” has taken on a nuanced and specialised meaning. While the everyday usage of the term typically evokes notions of clarity, openness, and straightforwardness, in AI ethics, transparency becomes even more multifaceted. One aspect of this is the capacity for AI systems to be interpretable. That is, those interacting with an AI system should be able to decipher how and why the system made a particular decision or acted in a certain way. This kind of transparency is about shedding light on the internal workings of the often enigmatic AI mechanisms, allowing for greater understanding and trust.

Furthermore, transparency isn’t limited to merely understanding the “how” and “why” of AI decisions. It also encompasses the ethical considerations behind both the design and deployment of AI systems. When AI systems are said to be transparent, it implies that they can be justified as ethical, unbiased, trustworthy, and safety-oriented both in their creation and their outcomes. This dual focus on process and product is vital.

In developing AI, teams are tasked with several responsibilities to ensure this two-tiered transparency. First, from a process perspective, there is a need to assure all stakeholders that the entire journey of creating the AI system was ethically sound, unbiased, and instilled with measures ensuring trust and safety. This includes not just designing with these values in mind but also ensuring auditability at every stage.

Secondly, when it comes to the outcome or product of AI, there’s the obligation to make sure that any decision made by the AI system is elucidated in ways that are understandable to non-experts. The explanations shouldn’t merely regurgitate the mathematical or technical jargon but should be phrased in relatable terms, reflecting societal contexts. Furthermore, the results or behaviors of the AI should be defensible, fitting within parameters of fairness, trustworthiness, and ethical appropriateness.

In addition to these tasks, there’s a broader need for professional and institutional transparency. Every individual involved in the AI’s development and deployment should adhere to stringent standards that emphasise values like integrity, honesty, and neutrality. Their primary allegiance should be to the public’s best interests, superseding other considerations.

Moreover, throughout the AI development process, there should be an open channel for public oversight. Of course, certain information may need to remain confidential for valid reasons, like ensuring bad actors can’t exploit the system. But, by and large, the emphasis should be on openness.

Transitioning into the structural aspects of AI development, a Process-Based Governance (PBG) Framework emerges as a crucial tool. Such a framework is pivotal for integrating ethical considerations and best practices seamlessly into the actual development process. The guide might delve into specifics like the CRISP-DM, but it’s worth noting that the principles of responsible AI development can be incorporated into other workflow models, including KDD and SEMMA. Adopting such a framework helps ensure that the values underpinning ethical AI are not just theoretical but find active expression in every phase of the AI’s life cycle.

Alan Turing’s simple sketch in 1936 was nothing short of revolutionary. With just a linear tape, symbols, and a set of rules, he demystified the very essence of calculation, giving birth to the conceptual foundation of the modern computer. His Turing machine wasn’t just a solution to the enigma of effective calculation; it was the conceptual forerunner of the digital revolution we live in today. This innovative leap, stemming from a quiet room at King’s College, Cambridge, is foundational to our digital landscape.

Fast forward to our present day, and we find ourselves immersed in a world where the lines between the physical and digital blur. The seamless interplay of connected devices, sophisticated algorithms, and vast cloud computing platforms is redefining our very existence. Technologies like the Internet of Things and edge computing are not just changing the way we live and work; they’re reshaping the very fabric of our society. AI is becoming more than just a tool or a technology; it is rapidly emerging as the fulcrum upon which our future balances. The possibilities it presents, both optimistic and cautionary, are monumental. It’s essential to realise that the trajectory of AI’s impact lies in our hands. The decisions we make today will shape the society of tomorrow, and the implications of these choices weigh heavily on our collective conscience.

It’s paramount to see that artificial intelligence isn’t just about codes and algorithms. It’s about humanity, our aspirations, our values, and our shared vision for the future. In many ways, the guide on AI ethics and safety serves as a compass, echoing Turing’s ethos by emphasising that the realm of AI, at its core, remains a profoundly human domain. Every line of code, every algorithmic model, every deployment carries with it a piece of human intention, purpose, and responsibility.

In essence, understanding the ethics and safety of AI isn’t just about mitigating risks or optimising outputs. It’s about introspection and realising that behind every technological advancement lie human choices. Responsible innovation isn’t just a catchphrase; it’s a call to action. Only by staying grounded in our shared ethical values and purpose-driven intentions can we truly harness AI’s potential. Let’s not just be passive recipients of technology’s gifts. Instead, let’s actively shape its direction, ensuring that our collective digital future resonates with our shared vision of humanity’s greatest aspirations.

Links

https://www.turing.ac.uk/news/publications/understanding-artificial-intelligence-ethics-and-safety

https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf

Blockchain and Cryptocurrencies

First published 2022

Blockchain is a method of breaking data into blocks that are linked together by cryptography, so the data is very secure and difficult to change. This way of chaining data cryptographically makes it very difficult to hack. The idea was first described in 1991 but was not put to serious use until 2009, when the cryptocurrency Bitcoin was built on the method. It has proved so safe and reliable that it is now being used for many things, especially online cryptocurrencies like Bitcoin. The technology behind blockchain is improving and it is slowly being introduced into new areas, which is having a big impact on how the world works.

Blockchain relies on three mechanisms that together make the data very safe and reliable. Firstly, the data is broken into blocks. Each block contains a special code called a ‘hash’, which is computed from the data in the block: change the data and the hash changes. Each block also contains the hash of the block before it, so they all link up. If you change one block, its hash no longer matches the copy held by the next block, the chain is broken, and the tampering is exposed.
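
A tiny sketch of this hash linking, using Python’s standard hashlib, is shown below; it is an illustration of the idea, not a real blockchain implementation.

import hashlib

def block_hash(data, previous_hash):
    # Each block's hash covers its own data plus the previous block's hash.
    return hashlib.sha256((data + previous_hash).encode()).hexdigest()

genesis = block_hash("first block",  "0" * 64)
second  = block_hash("second block", genesis)
third   = block_hash("third block",  second)

# Tampering with the first block changes its hash, so the links stored in
# the later blocks no longer match and the chain is visibly broken.
tampered = block_hash("first block (edited)", "0" * 64)
print(tampered != genesis)   # True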

What is to stop someone writing a program that changes the data in a block and then re-writes all the hashes that follow, thereby ‘hacking’ the chain? Producing an acceptable hash has deliberately been made computationally expensive: a block only counts as valid if its hash meets a difficulty target, and in Bitcoin the difficulty is tuned so that the entire network takes about 10 minutes to find one valid block. This is called ‘Proof of Work’, and it slows the process down so much that re-writing a long run of blocks is practically impossible.
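
The sketch below shows the idea behind Proof of Work at a toy difficulty: keep trying different ‘nonce’ values until the block’s hash starts with a required number of zeros. Real networks use a vastly higher difficulty, which is what makes the work so slow.

import hashlib

def mine(data, difficulty=4):
    # Search for a nonce that makes the hash start with `difficulty` zeros.
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block contents")
print(nonce, digest)   # raising `difficulty` makes this take exponentially longer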

The next step in making the blockchain safe is to spread multiple copies of it around the Internet – this is called a ‘peer to peer’ system or a ‘distributed ledger’. If anyone makes changes to one copy of the blockchain, those changes are checked against the others, and if they do not match, the changes are refused. This means you would have to ‘hack’ all of the copies of the blockchain at the same time, which is almost impossible to do.

What could it be used for?
New currency: a blockchain currency is not controlled by any bank or government, which, arguably, makes it safer and more stable.
Health records: health records need to be kept private but also shared quickly in an emergency – blockchain could help do this.
Land titles: in developing countries, keeping track of who owns what land is difficult when there are no trusted organisations to do it. Blockchain could take on this role, which would give businesses more confidence to invest and spend money in those countries.
School records: it is expensive to get university degrees ‘authenticated’ by lawyers – blockchain could solve this by making sure an academic record could never be hacked or changed.

To extend a blockchain, very powerful computers must complete the ‘Proof of Work’ to produce valid hashes for the new blocks. Some companies do this commercially for the new cryptocurrencies and sell on the coins they create. Because the process is slow by design, and because currencies such as Bitcoin limit by algorithm how many coins can ever exist, it resembles extracting a scarce resource and is called ‘mining’. It takes a long time, lots of fast computers, and lots and lots of electricity, which poses challenges for the environment.

It was only the advent of the Internet that made computers ‘must have’ items for every home, and indeed it is the Internet that has enabled the crypto revolution. The pace of adoption and growth in cryptocurrency has been staggering. Yet most people have yet to embrace cryptocurrency on any level, and there is also arguably something of a generational gulf. It certainly seems that crypto appeals to millennials much more than to older people – a Piplsay study in May 2021 found that half of millennials own cryptocurrency, a much higher percentage than any other surveyed demographic.

Not only do many people not understand the blockchain, cryptocurrency, NFTs, and a host of related topics, they also don’t see their potential. Cryptocurrency is still often viewed as a novelty, a gimmick, a fly-by-night obscurity that couldn’t possibly exist alongside the good old dollar. However, increasingly, that view isn’t shared by the mainstream financial industry. Institutional and investment money is rapidly flowing into cryptocurrency. While banks and other major financial institutions may initially have been hostile towards cryptos, their attitude now ranges from a tacit acceptance of the concept to outright enthusiasm.

Links

https://www.investing.com/news/cryptocurrency-news/piplsay-study-says-33-of-americans-own-cryptocurrency-2519769

Use of Artificial Intelligence in the UK Police Force

First published 2022; revised 2023

Artificial Intelligence (AI) has emerged as a groundbreaking technology with immense potential to transform various sectors, including law enforcement. In the United Kingdom, the integration of AI into the police force has garnered both attention and scrutiny. Despite one report calling for police forces in the UK to end entirely their use of predictive mapping programs and individual risk assessment programs, AI in policing is growing and shows no signs of letting up; according to Deloitte, more than half of UK police forces had planned to invest in AI by 2020. The use of AI in the UK police force, however, brings with it a mix of benefits, challenges, and ethical considerations.

The adoption of AI technologies in the UK police force offers several tangible advantages. One of the primary benefits is enhanced predictive policing. AI algorithms can analyse vast amounts of historical crime data to identify patterns, trends, and potential crime hotspots. This predictive capability allows law enforcement agencies to allocate resources more effectively and proactively prevent criminal activities. Moreover, AI-powered facial recognition technology has been employed to aid in identifying suspects or missing persons. This technology can scan through large databases of images and match them with real-time surveillance footage, assisting officers in locating individuals quickly and efficiently.
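
As a hedged illustration of hotspot identification, the sketch below fits a kernel density estimate over hypothetical historical incident coordinates and scores candidate patrol locations. The coordinates, bandwidth, and use of scikit-learn’s KernelDensity are assumptions chosen for demonstration, not a description of any deployed policing system.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical historical incident coordinates (latitude, longitude).
incidents = np.array([
    [51.515, -0.090], [51.516, -0.092], [51.514, -0.089],   # cluster A
    [51.470, -0.120], [51.471, -0.119],                      # cluster B
    [51.530, -0.050],                                        # isolated report
])

# Fit a kernel density estimate over past incidents.
kde = KernelDensity(kernel="gaussian", bandwidth=0.005).fit(incidents)

# Score candidate locations: higher log-density suggests a likelier hotspot.
candidates = np.array([[51.515, -0.091], [51.500, -0.100]])
scores = kde.score_samples(candidates)
for loc, s in zip(candidates, scores):
    print(loc, round(float(s), 2))
```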

However, the integration of AI in policing is not without its challenges. One of the major concerns is the potential for bias in AI algorithms. If the training data used to develop these algorithms is biased, the technology can inadvertently perpetuate and amplify existing biases, leading to discriminatory outcomes, particularly against minority groups. Ensuring fairness and equity in AI-driven law enforcement practices remains a significant hurdle.

Another issue is privacy infringement. The use of facial recognition technology and other surveillance methods can raise concerns about citizens’ right to privacy. Striking a balance between public safety and individual rights is crucial, as unchecked AI implementation could erode civil liberties. Ethical considerations surrounding AI implementation in the UK police force are paramount. Transparency in how AI algorithms operate and make decisions is essential to maintain public trust. Citizens have the right to understand how these technologies are used and what safeguards are in place to prevent misuse.

Additionally, accountability is crucial. While AI can aid decision-making, final judgments should remain within human control. Police officers should not blindly follow AI recommendations but rather use them as tools to support their expertise. Challenges such as bias, privacy concerns, and ethical considerations must be carefully addressed to ensure that AI is a force for positive change and does not infringe upon citizens’ rights or exacerbate societal inequalities. As the technology continues to evolve, it is imperative that the UK police force strikes a balance between harnessing AI’s capabilities and upholding fundamental principles of justice and fairness.

Links

https://committees.parliament.uk/event/18021/formal-meeting-oral-evidence-session/

https://www.nesta.org.uk/blog/making-case-ai-policing/

Data Bias in Artificial Intelligence

First published 2021

Artificial Intelligence (AI) has revolutionised various industries, offering unprecedented insights, efficiency, and accuracy. However, its potential benefits are often shadowed by the lurking issue of data bias. Data bias refers to the presence of skewed, unrepresentative, or discriminatory data in AI training sets, leading to biased outcomes and potentially harmful consequences. In an increasingly globalised world, the impact of data bias is magnified when AI systems developed in one context are applied to different cultural, demographic, or geographical contexts. In such cases, data bias can lead to inaccuracies and disparate outcomes.

For AI systems to make informed decisions, they rely heavily on the data they are trained on. If this data contains inherent biases or lacks diversity, the AI can inherit and perpetuate these biases, leading to skewed outcomes. For example, suppose an AI system is developed in the US and shown to be very accurate on US patients. If this system is then used in a UK hospital to diagnose patients, it may produce a higher rate of misdiagnoses and errors for UK patients. This can occur because of the difference between the AI’s training data and the data it subsequently processes.

The US training data is likely to differ in its distribution of ethnicity, race, age, sex, and socioeconomic and environmental factors, and the algorithm will have learned correlations based on those features. When applied to the UK population, with a different mix of those features, the AI system may produce less accurate results due to algorithmic bias. This can disproportionately impact certain minority groups, because of a mismatch between their representation in the training data and in the data the AI system is used on.

Algorithmic bias can lead to disproportionately negative impacts on certain minority groups, primarily due to a lack of representation in training data. The US data, which primarily reflects US demographics, may fail to capture the intricacies of the UK population. This disparity in representation can lead to poorer performance for minority groups within the UK context. The AI’s misdiagnoses and errors might disproportionately affect these groups because of a lack of relevant training data, thereby perpetuating health disparities.

To address data bias when applying AI systems across different contexts, several strategies can be employed. One strategy is to curate training data that adequately represents the target context; collaboration with local experts and institutions can help ensure data diversity and inclusivity. A second strategy is to continuously monitor AI systems for biased outcomes: regular audits and assessments can identify disparities and allow for timely corrective measures. Algorithms should also be designed to adapt to new contexts, taking account of local nuances and differences to enhance accuracy. Finally, AI developers should strive for transparency in algorithmic decision-making processes. This can help identify biases and enable stakeholders to understand and address potential inaccuracies.
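
One of the monitoring strategies above, checking for biased outcomes, can start with something as simple as breaking a model’s performance down by demographic group. The sketch below uses invented audit data and pandas to flag a large accuracy gap between two hypothetical groups; a real audit would use clinically meaningful metrics and properly governed data.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical audit table: each row is one patient the deployed model has scored.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "B"],
    "true_label": [1, 0, 1, 1, 0, 1, 0, 1],
    "predicted":  [1, 0, 1, 0, 0, 0, 1, 1],
})

# Accuracy broken down by group: a large gap is a signal to investigate further.
for group, g in results.groupby("group"):
    acc = accuracy_score(g["true_label"], g["predicted"])
    print(group, round(acc, 2))   # e.g. A scores far higher than B
```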

This example of data bias in medical diagnosis using AI demonstrates the critical importance of addressing the issue in AI applications across all fields. The impact of biased training data can result in algorithmic inaccuracies and disproportionate effects on specific groups, underscoring the necessity of ethical and responsible AI development. As AI continues to advance and globalise, it is important to be aware of the potential for data bias, so that risks can be mitigated during the development of AI systems and monitoring can take place to ensure that they do not produce biased outputs.

Links

https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

https://www.healthcareitnews.com/news/how-ai-bias-happens-and-how-eliminate-it

Use of Artificial Intelligence in the UK NHS

First published 2021; revised 2023

The digital revolution, characterised by the rise of Artificial Intelligence (AI) and ‘Big Data’, holds transformative potential for industries worldwide. Within this context, the healthcare sector stands out as one with particularly profound implications, given the possibility of improving patient outcomes, optimising operational efficiencies, and enabling more personalised care. The United Kingdom, known for its comprehensive public health system, the NHS, is strategically positioning itself to harness this technological tide. As the UK government and its various departments recognise the significance of these innovations, they are making targeted efforts to integrate AI and data-centric solutions into the very fabric of their healthcare system. This essay explores the UK’s initiatives, investments, and long-term plans that aim to pioneer the integration of AI into healthcare on a global scale.

The UK government and the Department of Health have identified AI and ‘Big Data’ as a key strategic priority for the UK, especially for healthcare improvement. They have already invested £250 million in the formation of an ‘NHS AI Lab’ with the intention of creating an open environment in which the development of medical AI can flourish. They have also invested significantly in consultation and guidance to create a roadmap to support the use of AI and data in healthcare. However, while this investment promises significant advancements in healthcare efficiency and personalised medicine, it also raises critical concerns about data privacy, the ethical use of AI, and the potential for deepening healthcare inequalities if not managed with a comprehensive and inclusive approach.

In 2019, the ‘NHS long term plan’ was published by NHS England, setting out priorities for NHS healthcare over the next 10 years. This plan promotes AI as one type of digitally-enabled care, which will help clinicians and support patients: “It is expected that AI will be instrumental in freeing up staff time and improving efficiency of [NHS] services. AI also has the potential to improve accuracy and efficiency in diagnostic services, and administrative processes. The aim is that ‘digitally enabled care’ will go mainstream across the NHS.” This quote underlines the anticipated role of AI in transforming NHS operations, where it’s not just about adopting new technology but integrating it to enhance service delivery, streamline workflows, and ultimately lead to better patient outcomes by making healthcare more responsive and efficient.

The government has also developed a ‘National AI Strategy’ with the aim to maximise the potential of AI, and increase resilience, productivity, growth, and innovation across public and private sectors. As part of this strategy, the government tasked regulators and public bodies to set safety standards that must be met by creators of medical AI. In order to create such regulatory standards for AI, the Medicines & Healthcare products Regulatory Agency (MHRA) has recently launched a consultation on AI accountability. The National AI Strategy also provides funding for research through the National Institute for Health and Care Research (NIHR). Additionally, the Health Data Research UK (HDR-UK) program has recently been developed to create large, linked data sets to enable discoveries to improve people’s lives, and to provide evidence for the use of AI.

The UK’s initiatives, investments, and long-term plans to integrate AI into healthcare position the country as a potential global leader in this field. By developing a comprehensive National AI Strategy and actively involving regulatory bodies like the MHRA in setting safety standards, the UK is laying the groundwork for a robust, secure, and ethical AI infrastructure in healthcare. This approach not only aims to enhance the quality and efficiency of healthcare services domestically but also sets a global precedent for how AI can be responsibly integrated into healthcare systems. The commitment to funding research through NIHR and the establishment of the HDR-UK program demonstrates the UK’s dedication to harnessing data-driven insights for medical advancements. These efforts could significantly influence global health policy and practices, potentially leading to international collaborations and setting a benchmark for other countries to follow in the ethical and effective implementation of AI in healthcare.

The NHS already has one of the world’s largest databases of patient records, but it is currently not well structured. If well curated, this has the potential to be the largest healthcare data set in the world, and for the NHS consequently to become a pioneering world leader in the field of AI in healthcare. Moreover, the NHS is the largest employer in Europe and has an annual budget of £190 billion (around 10% of UK GDP). The size of this healthcare sector means there is huge potential to benefit from the increased productivity and efficiency that AI could enable.

Given the rapid evolution and integration of technology in healthcare, it’s clear that the UK is positioning itself at the forefront of medical AI advancement. With the NHS’s substantial financial backing and vast database, there lies an unprecedented opportunity to harness AI capabilities, which can revolutionise patient care, streamline administrative processes, and significantly reduce healthcare costs. If these technological strides are complemented with comprehensive training for medical professionals and a robust framework that ensures patient data security, the benefits will be multi-fold. Ensuring that the AI tools are ethically developed and inclusively designed can also mitigate potential biases and ensure that the healthcare system remains equitable for all.

In conclusion, the UK government’s commitment to integrating AI and ‘Big Data’ into its healthcare system is not just an ambitious vision but a critical step towards redefining the future of global healthcare. With proper governance, collaboration, and innovation, the intersection of AI and healthcare in the UK can set a precedent for other countries to follow, ensuring better patient outcomes and efficient healthcare systems globally.

Links

https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health

https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme-roadmap

https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/the-national-strategy-for-ai-in-health-and-social-care

https://www.longtermplan.nhs.uk/wp-content/uploads/2019/08/nhs-long-term-plan-version-1.2.pdf

Artificial Intelligence in Dentistry

First published 2021; revised 2023

Artificial intelligence (AI) has significantly expanded its role and importance across various sectors, including dentistry, where it can replicate human intelligence to perform intricate predictive and decision-making tasks within the healthcare field, especially in the context of endodontics. AI models, such as convolutional neural networks and artificial neural networks, have displayed a wide array of applications in endodontics. These applications encompass tasks like analysing root canal system anatomy, predicting the viability of dental pulp stem cells, determining working lengths, detecting root fractures and periapical lesions, and forecasting the success of retreatment procedures.

The potential future applications of AI in this domain have also been explored, encompassing areas such as appointment scheduling, patient care enhancement, drug-drug interaction assessments, prognostic diagnostics, and robotic-assisted endodontic surgery. In the realm of disease detection, assessment, and prognosis, AI has demonstrated impressive levels of accuracy and precision. It has the capacity to contribute to the advancement of endodontic diagnosis and therapy, ultimately improving treatment outcomes.

AI models operate through two distinct phases: a “training” phase and a “testing” phase. During the training phase, the model’s parameters are fitted using training data drawn from previous instances, which could include patient data or data from diverse example datasets. In the testing phase, those fitted parameters are then applied to separate test datasets to check how well the model performs on data it has not seen before.
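
A minimal sketch of these two phases, using scikit-learn and a synthetic stand-in for patient records, is shown below. The dataset, model choice, and split sizes are illustrative assumptions rather than a description of any endodontic system in use.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical records (features plus known outcomes).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Training phase: the model's parameters are fitted on the training split only.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Testing phase: the fitted parameters are applied to data the model has never seen.
print("held-out accuracy:", model.score(X_test, y_test))
```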

In the past, artificial intelligence models were often described as “black boxes” because they produced outputs without providing any explanation of the rationale behind their decisions. Contemporary AI can function differently. When given an input, such as an image, a model can generate a “heatmap” alongside its prediction, such as identifying the image as a “cat”. The heatmap visually represents which input variables, such as particular pixels, influenced the prediction. This makes it possible to judge whether the model is relying on dependable, relevant features: for instance, it allows cat photos to be categorised on the basis of features like the cat’s ears and nose, which makes the prediction both safer to rely on and easier to scrutinise.
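
One simple way to produce such a heatmap is occlusion sensitivity: mask one patch of the image at a time and record how much the model’s confidence drops. The sketch below assumes a hypothetical predict_fn that returns the model’s confidence (for example, the probability of “cat”) for an image; the patch size and masking-with-zeros choice are illustrative, and modern toolkits also offer gradient-based alternatives.

```python
import numpy as np

def occlusion_saliency(image, predict_fn, patch=8):
    """Estimate which regions drove a prediction by blanking patches one at a
    time and measuring how much the model's confidence drops (occlusion
    sensitivity, one simple way to build the heatmaps described above)."""
    baseline = predict_fn(image)
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0        # blank out one patch
            heatmap[i // patch, j // patch] = baseline - predict_fn(occluded)
    return heatmap  # larger values = that patch mattered more to the prediction

# `predict_fn` is a placeholder for any model returning a single confidence score.
```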

Thanks to the rapid advancement of three key pillars of contemporary AI technology—namely, the proliferation of big data from digital devices, increased computational capacity, and the refinement of AI algorithms—AI applications have gained traction in enhancing the convenience of people’s lives. In the field of dentistry, AI has found its place across all dental specialties, including operative dentistry, periodontics, orthodontics, oral and maxillofacial surgery, and prosthodontics.

The majority of AI applications in dentistry have been channelled into diagnostic tasks that rely on radiographic or optical images. Tasks that are not image-based have seen less uptake, primarily due to challenges related to data availability, data consistency, and the computational capabilities required for handling three-dimensional (3D) data.

In endodontics, artificial intelligence is becoming increasingly significant, and its importance in both treatment planning and disease diagnosis is currently on the rise. AI-based networks have the capability to detect even the most subtle changes, down to the level of a single pixel, which might go unnoticed by the human eye. A few of its applications in endodontics include analysing root canal system anatomy, detecting root fractures, periapical lesions, and dental caries, locating the minor apical foramen, and predicting the success of retreatment.

Evidence-based dentistry (EBD) stands as the benchmark for decision-making in the dental profession, with AI machine learning (ML) models serving as complementary tools that learn from human expertise. ML can be viewed as an additional valuable resource to aid dental professionals across various stages of clinical cases. The dental industry experiences swift development and adoption of emerging technologies, and among these, AI stands out as one of the most promising. It offers significant advantages, including high accuracy and efficiency when trained on unbiased data with a well-optimised algorithm. Dental professionals can regard AI as an auxiliary resource to alleviate their workload and enhance the precision and accuracy of tasks related to diagnosis, decision-making, treatment planning, forecasting treatment results, and predicting disease outcomes.

Links

https://www.cbsnews.com/news/ai-artificial-intelligence-dentists-health-care/

https://instituteofdigitaldentistry.com/news/the-role-of-ai-in-dentistry/

https://adanews.ada.org/ada-news/2023/june/artificial-intelligence-and-dentistry/

https://www.frontiersin.org/articles/10.3389/fdmed.2023.1085251/full

https://head-face-med.biomedcentral.com/articles/10.1186/s13005-023-00368-z

Artificial Intelligence in Healthcare

First published 2021; revised 2023

Artificial Intelligence (AI) is a broad term with numerous definitions that encompasses an entire well-established field of computer science. Within the context of healthcare and medicine, perhaps the best way of defining AI is with one of the oldest and most basic definitions, as posited by Marvin Minsky, co-founder of MIT’s AI laboratory: “the science of making machines do things that would require intelligence if done by people”. Alternatively, the National Institute for Health and Care Research (NIHR) defines an intelligent system as “one which takes the best possible action in a given situation”. Further, there is a wide range of terminology used to describe this work, such as Deep Learning, Machine Learning, Data Insights, and AI algorithms.

There are two main types of AI: ‘Generative AI’ (such as ChatGPT), which creates new content based on learned patterns; and ‘Predictive AI’, which makes predictions about future events based on analysis of ‘Big Data’. Generally, the AI used in medicine and healthcare is predictive AI.

AI can be categorised into three bands of complexity: High, Middle, and Low. Low complexity AI programs already have widespread use within the NHS. They are extensively used, embedded in care, and well researched. These are often reasonably basic, so do not require much computing power, and, most importantly, are not involved in making high-stakes decisions. An example of the use of Low Complexity AI is the Cardiovascular Disease Risk score (QRISK®) that GPs use to calculate a 10-year risk of cardiovascular disease (leading to heart attack or stroke) for all patients aged between 25 and 84, and for those with type 2 diabetes. The computer system automatically populates (inputs) various aspects of the patient’s data, such as body mass index (BMI), blood pressure, and cholesterol levels, and the algorithm then creates a percentage risk score. The GP can then use national guidance to advise whether treatment is necessary.
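
To illustrate how a low-complexity risk calculator of this kind combines patient data into a percentage score, here is a toy logistic model in Python. The weights, intercept, and example inputs are invented for illustration only and are not the published QRISK coefficients; the real tool is derived from large, validated cohort studies.

```python
import math

def toy_cvd_risk(age, systolic_bp, chol_hdl_ratio, bmi, smoker):
    """Toy 10-year cardiovascular risk estimate. The weights below are made up
    for illustration and are NOT the published QRISK model."""
    score = (-12.0
             + 0.08 * age
             + 0.02 * systolic_bp
             + 0.25 * chol_hdl_ratio
             + 0.05 * bmi
             + 0.7 * int(smoker))
    risk = 1 / (1 + math.exp(-score))   # logistic link -> probability
    return round(100 * risk, 1)         # expressed as a percentage

print(toy_cvd_risk(age=62, systolic_bp=140, chol_hdl_ratio=4.5, bmi=29, smoker=True))
```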

Another example of lower complexity AI use is in speech recognition devices. These are used by doctors to increase the efficiency of ‘writing up’ patients’ notes. The doctor dictates into a device, which then recognises the speech and creates written text in patients’ digital notes. These examples of low complexity AI are well-trusted and heavily tested, but are not at the forefront of AI development and innovation.

Middle and High complexity AI systems are the current area of interest and focus and are being developed and created at pace. The results of a state of the nation survey (AHSN Network AI Report, 2018) suggested that AI will be a tool to help doctors and all healthcare professionals become more efficient and deliver a higher standard of care to patients with a greater cost-benefit. However, the use of these systems is often small scale and only in certain locations, for example as part of a pilot study, test of change, or research study. These systems are not usually used in routine clinical practice, and the cost-effectiveness and long-term effects on patients’ outcomes are often unknown. There are many examples of these new, higher-complexity AI systems. These AI systems can support the following areas of NHS care: diagnostics, treatment (for example, genomic medicine and drug development), and health service benefit.

The most important factor required for the development of AI is data. Lots and lots of data. Essentially, raw data (for example: images, text files, or DNA code) is fed into a supercomputer, which then starts to see correlations between different aspects of these data. Importantly, the computer has algorithms that provide a basis for the interpretation of this inputted data. This requires a great deal of computing power, and it is only with the recent advent of more powerful computers that the technology has become able to cope with these tasks. The AI system interprets the data to create predictions and estimations of future events based on patterns in the data. For example, if an AI system was being built to recognise images of cars, it would first need an algorithm capable of interpreting images. The algorithm would then be fed millions of images and would start to be able to distinguish cars from not-cars. As the algorithm is given more and more images, it progressively becomes better, until it is able to identify images of cars as well as a human can. The AI system is constantly improving and changing its algorithm as this process happens.
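
The “more data, better predictions” point can be demonstrated with a small, hedged sketch: a simple classifier is refitted on progressively larger slices of a synthetic dataset (standing in for labelled car/not-car images), and its held-out accuracy typically climbs. The dataset, model choice, and sample sizes are illustrative assumptions, not a description of how production image classifiers are built.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled images ("car" vs "not car"), flattened to features.
X, y = make_classification(n_samples=20000, n_features=50, n_informative=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

# Refit on progressively larger slices of the training data: accuracy tends to climb.
for n in (100, 1000, 10000):
    model = LogisticRegression(max_iter=2000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>5} examples -> test accuracy {model.score(X_test, y_test):.3f}")
```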

In healthcare, this pattern-matching potential has a clear application in diagnostics. An AI system could be trained with images of skin moles, for example, rather than images of cars. Over time, the algorithm will be able to correctly identify images of skin moles. Once it can do this, it could then be given more images of moles on skin, but with some of these being cancerous (such as melanoma). Initially, the algorithm would be informed about which images represent cancerous conditions and which do not. It then begins to discern the patterns and correlations between the images to distinguish between cancer and benign moles. The more data the algorithm is given, the better it gets until it can successfully and accurately detect skin cancer from an image. Consequently, AI has the potential to become very powerful.

However, while the promise of AI in healthcare is immense, it’s imperative to approach this developing field with a sense of caution and ethical responsibility. The integration of AI systems into patient care presents a myriad of ethical dilemmas. How do we ensure that the data used to train these algorithms is representative of the diverse patient populations it will serve? How do we address potential biases in the algorithms, ensuring they do not perpetuate or amplify societal inequalities? Furthermore, the transparency and explainability of AI decisions are of paramount importance, especially when human lives are at stake. Patients, clinicians, and other stakeholders need to trust and understand how AI makes its decisions. If a mistake occurs, the responsibility and accountability aspect of these AI systems must be clearly defined.

In conclusion, Artificial Intelligence holds transformative potential for the healthcare sector. From refining diagnostics to tailoring patient-specific treatments, the capabilities are vast. However, the success of AI in healthcare rests on the careful and ethical implementation of these systems. It’s not just about the sophistication of the algorithms, but also about the humanity and empathy with which they’re integrated into patient care. As we move forward, striking a balance between innovation and ethical responsibility will be key to harnessing the true potential of AI in healthcare.

Links

https://www.nihr.ac.uk/documents/artifiicial-intelligence-in-health-and-care-award-ai-definition/26007

GPs advised to use QRISK3 CVD score for certain patient groups

https://wessexahsn.org.uk/img/news/AHSN%20Network%20AI%20Report-1536078823.pdf

https://transform.england.nhs.uk/media/documents/NHSX_AI_report.pdf

https://wellcome.org/sites/default/files/ai-in-health-ethical-social-political-challenges.pdf

Issues Surrounding Black Box Algorithms in Surveillance

First published 2021; revised 2023

The rapid advancement of technology has transformed the landscape of surveillance, enabling the collection and analysis of vast amounts of data for various purposes, including security and law enforcement. Black box algorithms, also known as opaque or inscrutable algorithms, are complex computational processes that generate outputs without offering clear insights into their decision-making mechanisms. While these algorithms have demonstrated impressive capabilities, their use in surveillance systems raises significant concerns such as issues of transparency, accountability, bias, and potential infringements on civil liberties.

One of the primary problems with black box algorithms is their lack of transparency. These algorithms make decisions based on intricate patterns and correlations within data, which makes it difficult for even their developers to fully comprehend their decision-making processes. This opacity prevents people under surveillance from understanding why certain actions or decisions are taken against them. This lack of transparency raises questions about the legitimacy of the surveillance system, as people have a right to know the basis on which they are monitored.

The complexity of black box algorithms also creates challenges in attributing responsibility for any errors or unjust actions. If a surveillance system using black box algorithms produces wrong outcomes or infringes a person’s rights, it becomes challenging to hold anyone accountable. This accountability gap undermines the principles of justice and fairness and leaves people without recourse in case of harm.

Black box algorithms can inherit biases present in the data they are trained on. Surveillance systems using biased data can perpetuate and even amplify existing societal biases, leading to discriminatory outcomes. For example, if historical data reflects biased policing practices, a black box algorithm trained on such data might disproportionately target certain demographic groups, exacerbating social inequalities and eroding trust in law enforcement agencies.

The use of black box algorithms in surveillance also raises concerns about privacy and civil liberties. When these black box algorithms analyse and interpret personal information without clear guidelines, they may invade people’s privacy rights. As surveillance becomes more pervasive and intrusive, people might feel like their fundamental rights are being violated, which might cause societal unrest and resistance to the use of surveillance using black box algorithms.

The implementation of black box algorithms in surveillance often happens without enough public oversight or informed consent. This lack of transparency can lead to public mistrust because people are left in the dark about the extent and nature of the surveillance practices employed by authorities. Effective governance and democratic control over surveillance are compromised when decisions are made behind a shroud of complexity. To address these issues, it is essential to strike a balance between technological innovation and safeguarding individual rights. Policymakers, technologists, and civil society must collaborate to develop comprehensive regulations and frameworks that ensure transparency, accountability, and the protection of civil liberties in the ever-evolving landscape of surveillance technology.

Links

https://www.eff.org/deeplinks/2023/01/open-data-and-ai-black-box

https://towardsdatascience.com/black-box-theres-no-way-to-determine-how-the-algorithm-came-to-your-decision-19c9ee185a8

https://policyreview.info/articles/analysis/black-box-algorithms-and-rights-individuals-no-easy-solution-explainability

The Challenges of Employing Black Box Algorithms in Healthcare

First published 2021; revised 2023

In recent years, the healthcare industry has been rapidly adopting artificial intelligence (AI) and machine learning (ML) technologies to enhance diagnostics, treatment plans, and patient outcomes. Among these technologies, black box algorithms have gained attention because of their ability to process vast amounts of complex data. However, the use of black box algorithms in healthcare also presents a range of significant problems.

Black box algorithms, often based on deep learning models, have shown remarkable accuracy in various applications. They can autonomously learn patterns from data, allowing them to make predictions and decisions. Currently, most AI algorithms are referred to as ‘Black Box’ systems because they produce results without offering insight into the reasoning or logic behind their predictions. It is therefore impossible to know how the algorithm came to its conclusion. The algorithms are consequently thought of as impenetrable black boxes, where data is inputted and a result is outputted. For example, when a person notices they have a friend recommendation on Facebook for somebody they don’t know, not even a Facebook engineer is able to explain how it happened. This introduces an inherent lack of transparency into the system.

While this opacity might be acceptable in certain domains, it poses challenges in healthcare. If a Black Box AI system makes a diagnosis, how it reached that diagnosis is unknown. When a doctor reaches a diagnosis, by contrast, they are able to justify their decision and show the steps they took to reach it. Doctors agree to the Hippocratic Oath (to do no harm) and are held professionally accountable for their actions when treating a patient. Just as each individual doctor is accountable for their actions, the NHS as a whole must be able to justify its actions in the treatment of patients. Black box systems pose a problem for accountability because their actions cannot be justified, and this in turn creates difficulties for due process and for medico-legal proceedings.

One of the most pressing problems of employing black box algorithms in healthcare is their lack of transparency and interpretability. In the medical field, understanding the rationale behind an algorithm’s decision is crucial. To be used in a healthcare setting, AI systems must be transparent (opening the black box) and interpretable. Patients, doctors, and regulatory authorities require transparency in order to trust the recommendations made by AI systems. Black box algorithms, by their very nature, hinder the ability to explain why a certain diagnosis or treatment plan was suggested. This opacity can lead to scepticism, hinder accountability, and create ethical dilemmas. Transparency is necessary if healthcare workers and the public are to be able to trust the AI systems. Transparency is also required for good governance, and to ensure that healthcare professionals are not held liable for ‘mistakes’ made by AI if it is used to diagnose, or to help diagnose, conditions.

A proposed method of creating a transparent system is to show the relative importance of different aspects of a specific piece of data by using a saliency/attention map. The problem with attention maps is that they don’t explain anything beyond where in the data the AI is looking. They often use colour shading to represent areas of importance, with red the most important and blue the least. If attention maps are used in a medical application, they could lead to misdiagnoses or no diagnosis at all, and even lead to confirmation biases in high-stakes decision making. Clearly, attention maps are not sufficient for AI justifiability, at least not on their own. Currently, other methods of AI justifiability and interpretability are being explored, with the aim of producing a more robust solution for transparency.

Black box algorithms can inadvertently inherit biases present in the training data. In healthcare, biased algorithms could result in unequal treatment recommendations based on factors like gender, race, or socioeconomic status. This perpetuates disparities in patient care and can lead to severe consequences, both in terms of individual patient outcomes and broader societal implications. Ensuring fairness and equity becomes challenging when the decision-making process remains hidden within a black box. Ultimately, ‘Algorithmic Accountability’ is the recognition that AI mistakes or wrongdoing are fundamentally the result of human design and development decisions. For autonomous and semi-autonomous systems to become widespread within a future NHS, there need to be a clear legal course of action in the event of AI wrongdoing, protocols for use, and good governance in place.

Moreover, black box algorithms require large amounts of data for accurate predictions, and there’s a significant risk associated with data privacy and security. The healthcare industry is a prime target for cyber-attacks, given the sensitive nature of medical records. The integration of AI and ML into healthcare means an increasing amount of patient data is stored and processed digitally. While AI can greatly benefit diagnostics and treatment, there’s a potential for misuse, especially when the decision-making process is not transparent. Ensuring data integrity and patient confidentiality while using AI tools is paramount. It is vital for developers and the healthcare community to collaborate in establishing robust data protection measures, continuous monitoring, and swift response mechanisms to any breaches or irregularities.

In conclusion, while the promise and potential of AI and ML in revolutionising the healthcare landscape are undeniable, the challenges posed by black box algorithms cannot be overlooked. The key to their successful integration lies in ensuring transparency, accountability, and equity. As technology continues to evolve, a multi-disciplinary approach involving technologists, medical professionals, ethicists, and policymakers will be essential to harness the benefits of AI while mitigating its risks. A future where AI aids healthcare without compromising trust or ethical principles is not just desirable but essential for the continued advancement of medical science.

Links

https://www.facs.org/for-medical-professionals/news-publications/news-and-articles/bulletin/2023/july-2023-volume-108-issue-7/black-box-technology-shines-light-on-improving-or-safety-efficiency/

https://doi.org/10.5281/zenodo.3240529

https://www.investopedia.com/terms/b/blackbox.asp

https://doi.org/10.7326/0003-4819-124-2-199601150-00007