
AI Fundamentals Explained
Ebook · 160 pages · 1 hour


About this ebook

Welcome to my new book, AI Fundamentals Explained. My name is Brian Mackay, and I have worked in the IT industry since 1997, when I started on the BT Internet helpdesk in my home town of Thurso, in the county of Caithness on the north coast of Scotland. I went on to work at BT Global Services and Nildram in Buckinghamshire, England, before moving to Edinburgh, Scotland, where I worked for Scottish and Newcastle UK, then Heineken UK, NHS Quality Improvement Scotland, NHS Lothian, Bodycote Plc and the BSkyB service desk. I passed my Master's in Cybersecurity at Edinburgh Napier University in 2019 and now work as a cybersecurity consultant for The Scotcoin Project CIC. This book starts by looking at the early days of AI and machine learning, then moves on to the various types of AI today: what LLMs are, what ethical AI is, AI legislation, ChatGPT and Generative AI, and how AI will both benefit and challenge the cybersecurity industry, before finally looking at the potential future of AI, such as quantum AI.
Language: English
Publisher: Lulu.com
Release date: Aug 23, 2024
ISBN: 9781304085726

    Book preview

    AI Fundamentals Explained - Brian Mackay

    Table of Contents

    Chapter 1 - Early history of Artificial Intelligence and Machine Learning

    Chapter 2 – Introduction

    Artificial Intelligence

    Weak AI

    Strong AI

    Machine Learning

    Deep Learning

    Natural Language Processing

    Computer Vision

    Chapter 3 – The benefits and challenges of AI to Cybersecurity?

    Chapter 4 - AI Legislation

    Chapter 5 - What is Machine Learning?

    Supervised Learning

    Chapter 6 - What is Deep Learning?

    Chapter 7 - What is Ethical AI?

    Chapter 8 - What is Gen AI and how will it affect you?

    Chapter 9 - What exactly are LLMs?

    Chapter 10 - What is Quantum AI?

    Chapter 11 - The Next Generation of Generative AI

    Chapter 12 - What is Super AI?

    Chapter 13 - What is Dark GPT and Hacker GPT?

    13.1 – Dark GPT

    13.2 - Hacker GPT

    Chapter 14 - What is Agentic AI?

    Chapter 15 - What is Shadow AI?

    Chapter 16 - AI governance

    Chapter 17 - An educated prediction of the future of AI

    References

    ISBN - 978-1-304-08572-6

    Imprint: Lulu.com

    Copyright – © Brian Mackay 2024

    Chapter 1 - Early history of Artificial Intelligence and Machine Learning

    Here's a brief overview of some key events and milestones in the early history of Artificial Intelligence (AI) and Machine Learning:

    In 1943, Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, published a pioneering paper titled "A Logical Calculus of the Ideas Immanent in Nervous Activity." This work is often considered one of the foundational moments in the history of artificial neural networks and, by extension, artificial intelligence.

    Key Contributions of McCulloch and Pitts

    The McCulloch-Pitts Model of Neurons

    McCulloch and Pitts introduced a simplified model of a neuron, which aimed to mimic the behaviour of biological neurons in the human brain. Their neuron model could receive multiple inputs, process them, and then produce an output (firing or not firing) based on a threshold function.

    Inputs were treated as binary values (0 or 1), representing either the presence or absence of a stimulus. The neuron summed these inputs and, if the sum exceeded a certain threshold, the neuron would fire (output 1); if the sum fell below the threshold, the neuron would remain inactive (output 0).
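
    As a rough illustration of this threshold behaviour, the toy Python sketch below sums binary inputs and fires when the sum reaches a chosen threshold. It is an illustration for this chapter, not code from the original 1943 paper, and the convention of firing at or above the threshold is an assumption of the example.

        # Toy McCulloch-Pitts style neuron (illustrative sketch only).
        # Inputs are binary (0 or 1); the neuron fires (returns 1) when the
        # sum of its inputs reaches the chosen threshold, else stays at 0.
        def mcculloch_pitts_neuron(inputs, threshold):
            total = sum(inputs)
            return 1 if total >= threshold else 0

        # With a threshold of 2, the neuron fires only when at least two
        # of its three inputs are active.
        print(mcculloch_pitts_neuron([1, 1, 0], threshold=2))  # 1 (fires)
        print(mcculloch_pitts_neuron([1, 0, 0], threshold=2))  # 0 (inactive)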

    Boolean Logic

    The McCulloch-Pitts neuron model was rooted in Boolean logic, which helped describe neural activity in terms of simple logical operations such as AND, OR, and NOT.

    Their model demonstrated that networks of these simple neurons could compute any function that a logical circuit could. This insight suggested that complex brain functions might be reducible to a combination of simple logical operations performed by neurons.
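
    To make that claim concrete, the sketch below builds AND, OR and NOT from the same kind of threshold unit, treating any active inhibitory input as vetoing the neuron; this is one common textbook simplification of the 1943 model rather than the authors' exact formalism.

        # Illustrative McCulloch-Pitts style logic gates (assumed simplification:
        # excitatory inputs are summed, any active inhibitory input blocks firing).
        def mp_neuron(excitatory, inhibitory=(), threshold=1):
            if any(inhibitory):
                return 0
            return 1 if sum(excitatory) >= threshold else 0

        def AND(x, y):
            return mp_neuron([x, y], threshold=2)

        def OR(x, y):
            return mp_neuron([x, y], threshold=1)

        def NOT(x):
            return mp_neuron([], inhibitory=[x], threshold=0)

        # Composing the gates shows how small networks compute larger logic,
        # e.g. NAND(x, y) = NOT(AND(x, y)).
        for x in (0, 1):
            for y in (0, 1):
                print(x, y, AND(x, y), OR(x, y), NOT(AND(x, y)))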

    Artificial Neural Networks

    McCulloch and Pitts theorized that large networks of interconnected neurons could perform complex tasks by working together. This idea was foundational for the development of artificial neural networks, a key aspect of modern machine learning and AI.

    Universality

    One of the key results of their work was the demonstration that, in principle, such neural networks were universal in computational power. That is, given enough neurons and appropriate connections, they could compute any function that could be computed by a machine.

    Impact on Later Developments

    Influence on AI and Cognitive Science

    Their work laid the groundwork for understanding how computational models could represent brain activity and cognitive processes. It inspired later researchers such as John von Neumann and influenced the development of both AI and cybernetics.

    Perceptron and Deep Learning

    The McCulloch-Pitts neuron is considered a precursor to later artificial neural networks, such as the perceptron (developed by Frank Rosenblatt in 1958). Modern deep learning, which involves complex multi-layer neural networks, traces its lineage back to this early work.

    In summary, McCulloch and Pitts' 1943 model was a groundbreaking attempt to formalize the functioning of the brain in computational terms, helping to bridge neuroscience, logic, and computer science. It paved the way for the development of neural networks and artificial intelligence. (Abraham, 2002).

    In 1950, Alan Turing, a British mathematician and pioneering computer scientist, proposed the Turing Test in his paper titled "Computing Machinery and Intelligence." This test was designed as a way to assess a machine’s ability to exhibit intelligent behaviour that is indistinguishable from that of a human. It remains one of the most well-known concepts in the philosophy of artificial intelligence (AI).

    Key Elements of the Turing Test

    The Imitation Game

    Turing originally framed the test in the form of an Imitation Game. In this game, there are three participants: a human judge, a human participant, and a machine. The judge is in one room, separated from the human and the machine, and can only communicate with them through a text-based interface.

    The judge’s task is to determine which participant is human and which is the machine, based solely on the answers given to their questions. The machine tries to imitate human responses to fool the judge into thinking it is human.
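
    The three-role setup can be sketched as a short protocol in Python. This is a toy skeleton only: the respondent functions and the random assignment of labels are hypothetical stand-ins used to show that the judge works from text alone.

        # Toy skeleton of the imitation game (illustrative only; the respondent
        # functions are hypothetical stand-ins for a human and a machine).
        import random

        def run_imitation_game(questions, human_respondent, machine_respondent):
            # Hide the identities behind the anonymous labels "A" and "B".
            labelled = {"A": human_respondent, "B": machine_respondent}
            if random.random() < 0.5:
                labelled = {"A": machine_respondent, "B": human_respondent}
            # The judge sees only text: each question and the two answers.
            transcript = [(q, labelled["A"](q), labelled["B"](q)) for q in questions]
            # From this transcript alone, the judge must guess which label is the machine.
            return transcript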

    Machine Intelligence

    According to Turing, if the machine can convince the human judge that it is human through conversation—meaning the judge cannot reliably distinguish between the human and the machine—the machine can be said to exhibit intelligence.

    The test does not evaluate the machine’s ability to think or understand as a human does. Instead, it focuses on whether the machine’s behaviour (in terms of responses) can be indistinguishable from human behaviour.

    Behaviourism and Practicality

    Turing sidestepped the question of whether machines can truly think, which he viewed as a philosophical debate with no clear answer. Instead, he adopted a behaviourist approach, focusing on the external manifestation of intelligence rather than its internal processes. The Turing Test thus evaluates only observable behaviour rather than introspective mental processes.

    Turing believed that
