The Keep It Simple Guide

This guide is for everyone. It is updated as often as we can and whenever we are asked to. Start here to begin learning about AI, Technology, Business Change, Innovation and Transformation. No prior knowledge is required.

AI (Artificial Intelligence)

AI stands for Artificial Intelligence. Imagine your brain as a super-complex computer that helps you think, make decisions, and solve problems. AI is like a much simpler computer brain designed to do specific tasks that usually require human intelligence. These tasks can range from understanding spoken words to playing chess or even recognizing patterns in data.

AI is not as advanced as the human brain but is getting better at mimicking certain human abilities. It's like a parrot that can mimic human speech but doesn't fully understand what it's saying.

AGI (Artificial General Intelligence)

AGI stands for Artificial General Intelligence. Unlike regular AI, which is like a specialized chef who's really good at making just one dish, AGI is like a master chef who can cook any dish you can think of. In other words, AGI has the ability to understand, learn, and apply knowledge across different tasks, just like a human can.

While AI focuses on doing one thing really well, like recognizing faces in photos, AGI aims to be a jack-of-all-trades, capable of learning and adapting to new tasks on its own. However, as of now, AGI is more of a goal than a reality; we haven't created it yet.

Narrow AI versus AGI

Narrow AI vs. AGI refers to the difference between specialized and general-purpose artificial intelligence.

Imagine a toolbox. Narrow AI is like a single tool in that box, say a hammer. It's excellent for nails but not much else. AGI, on the other hand, would be like having an entire toolbox that can adapt and create new tools as needed. It can handle a variety of tasks, from hammering nails to tightening screws.

Narrow AI is what we mostly have today—systems that are really good at one specific thing, like recommending movies or translating languages. AGI is the goal for the future: a system that can perform any intellectual task that a human can do.

LLM (Large Language Model)

A Large Language Model, often abbreviated as LLM, is a computer program designed to understand and generate human-like text based on the data it has been trained on. Imagine it as a really good writer who has read tons of books, articles, and websites, and can now help you write essays, answer questions, or even chat like a human.

It's like having a super-smart typewriter that not only types what you say but can also suggest what to write next, answer your questions, and even create stories or poems.
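
If you are curious what "suggesting the next word" looks like in practice, here is a tiny, made-up sketch in Python. It simply counts which word most often follows another in a short piece of text; real LLMs use huge neural networks trained on vastly more data, but the underlying idea of guessing what comes next is similar.

```python
# A minimal sketch of the "suggest what to write next" idea, using a tiny
# made-up training text. Real LLMs learn from far more data with neural
# networks, not simple counting.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the dog sat on the rug ."
)

# Count which word most often follows each word in the training text.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    next_word_counts[current][nxt] += 1

def suggest_next(word):
    """Suggest the most likely next word seen during 'training'."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_next("the"))   # 'cat' (it appears most often after 'the')
print(suggest_next("sat"))   # 'on'
```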

Business Process

A business process is a set of steps or tasks that a company follows to accomplish a specific goal. Think of it like a recipe for baking a cake. The recipe outlines the ingredients you need and the steps to follow, like mixing, baking, and decorating, to end up with a finished cake.

In a business context, a process could be anything from hiring a new employee to delivering a product to a customer. Just like following a recipe ensures you get a good cake, following a business process ensures that tasks are done consistently and efficiently, leading to better results for the company.
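
If it helps to see a process written out explicitly, here is a small, purely illustrative Python sketch that treats a made-up hiring process as an ordered list of steps, in the same spirit as following a recipe.

```python
# A minimal sketch of a business process as an ordered list of steps,
# using an invented "hire a new employee" example.
hiring_process = [
    "Post the job advert",
    "Review applications",
    "Interview shortlisted candidates",
    "Make an offer",
    "Complete onboarding paperwork",
]

def run_process(steps):
    """Walk through each step in order, like following a recipe."""
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

run_process(hiring_process)
```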

Digital transformation

Digital transformation in a business context means using technology to improve or change the way a company operates. Imagine a traditional library that only has physical books. If this library starts offering e-books, online catalogs, and a mobile app for easier access, it's going through a digital transformation.

The goal is to make processes more efficient, improve customer experience, and stay competitive. For example, a retail store might start selling products online in addition to its physical location, or a bank might offer mobile banking services.

Knowledge Worker

A knowledge worker is someone whose job involves handling or using information, rather than performing manual labor. Imagine a detective who solves cases by piecing together clues and information. In the same way, a knowledge worker uses their expertise to analyze data, solve problems, or create new ideas.

Examples include software developers, doctors, researchers, and business analysts. These people rely on their skills and knowledge to perform their tasks.

Generative AI

Generative AI is a type of artificial intelligence that can create new content, like text, images, or music. Imagine a painter who not only copies existing paintings but can also create entirely new artworks. Generative AI works similarly; it doesn't just analyze data but can generate something new based on what it has learned.

For example, it can write a story, compose a piece of music, or even generate realistic images. It's like having a creative assistant that can help you come up with new ideas or content.
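
As a rough illustration of "learning patterns and then creating something new", here is a toy Python sketch that learns which musical notes tend to follow each other in a few made-up example melodies, then produces a new melody of its own. Real generative AI uses large neural networks, but the spirit is the same: learn patterns, then make something that was never in the examples.

```python
# A toy "generative" model: learn note-to-note patterns from example
# melodies, then sample a brand-new melody from those patterns.
import random
from collections import defaultdict

example_melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E"],
    ["G", "E", "D", "C", "C"],
]

# Learn which notes tend to follow which.
transitions = defaultdict(list)
for melody in example_melodies:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate_melody(start="C", length=6):
    """Sample a new melody note by note from the learned transitions."""
    melody = [start]
    while len(melody) < length:
        choices = transitions.get(melody[-1])
        if not choices:
            break
        melody.append(random.choice(choices))
    return melody

print(generate_melody())  # e.g. ['C', 'E', 'G', 'E', 'D', 'C'] - a new tune
```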

Text-to-Image

Text-to-Image refers to the process of generating an image based on a description given in text form. Imagine telling an artist to draw a "sunset over a beach," and they create a painting that matches your description. Text-to-Image technology aims to do something similar but with a computer program.

This is often used in research and some advanced applications like virtual worlds or design software. It's like having a digital artist that can take your words and turn them into a visual representation.
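
To show the "words in, picture out" direction of travel, here is a deliberately simple toy sketch in Python that turns hint words in a description into a tiny grid of colour names. Real text-to-image systems use neural networks and produce actual pictures; this is only meant to make the idea concrete.

```python
# A toy text-to-image idea: map words in a description to colours in a
# tiny "image" (a grid of colour names). The hint words are invented for
# this example.
colour_hints = {
    "sunset": "orange",
    "beach": "sandy yellow",
    "sea": "blue",
    "forest": "green",
}

def text_to_image(description, width=4, height=2):
    """Build a tiny grid of colours from whichever hint words appear."""
    colours = [c for word, c in colour_hints.items() if word in description.lower()]
    if not colours:
        colours = ["grey"]
    return [[colours[(row + col) % len(colours)] for col in range(width)]
            for row in range(height)]

for row in text_to_image("A sunset over a beach"):
    print(row)
```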

Midjourney

Midjourney is a text-to-image generative AI platform; a getting-started guide is available on the Midjourney website.

In-painting

In-painting in the context of text-to-image refers to filling in missing or changing parts of an image based on the surrounding area and the textual description provided. Imagine you have a puzzle with a few missing pieces. In-painting is like having an artist carefully paint the missing pieces to complete the picture, guided by both the existing pieces and a description you give.

For example, if you have an image of a beach but the sky is missing, and you describe the sky as "sunset colors," in-painting technology would fill in the missing sky with colors that match a sunset.
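
Here is a toy Python sketch of that idea, using a tiny "image" made of colour names with a missing sky. It fills the gaps based on a hint from the description; real in-painting uses neural networks to blend the fill seamlessly with its surroundings.

```python
# A toy in-painting sketch: fill missing cells in a tiny "image" using a
# hint from the text description. The image and colours are invented for
# this example.
image = [
    [None,   None,   None  ],   # missing sky
    ["sand", "sand", "sand"],
    ["sea",  "sea",  "sea" ],
]

def inpaint(image, description):
    """Fill the None cells, guided by the description."""
    fill = "orange" if "sunset" in description.lower() else "blue"
    return [[cell if cell is not None else fill for cell in row] for row in image]

for row in inpaint(image, "a beach with sunset colours in the sky"):
    print(row)
```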

Context (in LLMs and ChatGPT)

In a conversation with language models like ChatGPT, 'context' refers to the surrounding text that the model uses to generate a relevant reply. Think of context like the history of a conversation between you and a friend. If you suddenly ask, "What do you think?", your friend needs to know what you were talking about before to give a meaningful answer.

Why is context length a challenge?

  1. Memory Limitation: These models have a limit on how much text they can consider at once (measured in 'tokens', which are roughly words). Imagine it like a notepad with limited space; if the conversation gets too long, older parts must be erased to make room for new text (see the small sketch after this list).

  2. Accuracy: Longer context could mean the model has more to consider, making it more likely to give off-topic or incorrect answers. It's like if your friend has to remember a 2-hour conversation before answering your question; they might get some details wrong.
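
Here is the small sketch mentioned above: a minimal Python example of trimming a conversation so that only the most recent words fit on the "notepad". Real models count tokens rather than words and have much larger limits, but the trimming works in the same spirit.

```python
# A minimal sketch of the "notepad with limited space" idea: keep only the
# most recent words of a conversation, dropping the oldest ones.
def trim_context(conversation, max_words=10):
    """Keep only the last `max_words` words of the conversation."""
    words = conversation.split()
    return " ".join(words[-max_words:])

chat = ("Hi, I am planning a trip to Italy next spring and I would like "
        "some advice on where to go")
print(trim_context(chat))
# -> 'and I would like some advice on where to go' (the oldest words are dropped)
```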

Inference (in LLMs and ChatGPT)

Inference in the context of language models like ChatGPT refers to the process of generating a response based on the given input and context. Imagine you're playing a game of charades; your friend acts out something, and you have to guess what it is. Inference is like your guess, but for language models, it's about generating a text response based on the clues (input and context).

Inference Time is how long it takes for the model to make this guess or generate a reply. The quicker the inference time, the faster you get your answer. If your friend takes too long to guess in charades, the game becomes less fun, right? Similarly, if a language model takes too long, it's less useful for real-time applications.
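
As a minimal illustration, the Python sketch below measures the "inference time" of a made-up stand-in for a model (the fake_reply function is invented for this example); a real model would do far more work in that time.

```python
# A minimal sketch of measuring inference time: how long it takes to turn
# an input into a response.
import time

def fake_reply(prompt):
    """A stand-in 'model' that just returns a canned answer."""
    time.sleep(0.2)  # pretend the model is thinking
    return f"Here is a reply to: {prompt}"

start = time.perf_counter()
answer = fake_reply("What is the capital of France?")
inference_time = time.perf_counter() - start

print(answer)
print(f"Inference time: {inference_time:.2f} seconds")
```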

Machine Learning

Machine Learning (ML) is a subset of Artificial Intelligence (AI). Think of AI as a big toolbox that has various tools to make computers smart. Machine Learning is one of those specialized tools. It's like a hammer in a toolbox; useful for certain jobs but not everything.

AI can include simple rule-based systems, like a thermostat that turns on the heat when the temperature drops below a certain point. Machine Learning, on the other hand, enables a computer to learn from data and make decisions or predictions. It's like teaching a dog new tricks; the dog learns from repetition and reward.
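
Here is a small sketch of that contrast in Python, using made-up temperatures: the first thermostat follows a rule a person wrote down, while the second works out its own threshold from example data, which is the essence of machine learning.

```python
# Rule-based: a person hard-codes the threshold.
def thermostat_rule(temperature):
    return "heat on" if temperature < 19 else "heat off"

# Machine learning (very simplified): learn the threshold from examples of
# temperatures and what people actually did. The numbers are invented.
examples = [(15, "heat on"), (17, "heat on"), (18, "heat on"),
            (21, "heat off"), (23, "heat off"), (25, "heat off")]

on_temps = [t for t, action in examples if action == "heat on"]
off_temps = [t for t, action in examples if action == "heat off"]
learned_threshold = (max(on_temps) + min(off_temps)) / 2  # 19.5 here

def thermostat_learned(temperature):
    return "heat on" if temperature < learned_threshold else "heat off"

print(thermostat_rule(20))     # 'heat off' - follows the hand-written rule
print(thermostat_learned(20))  # 'heat off' - follows the rule learned from data
```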

Large Language Models (LLMs) like ChatGPT are a specific kind of machine learning model. They're specialized in understanding and generating human language. If Machine Learning is a general-purpose hammer, then LLMs are more like a finely crafted chisel, designed for detailed work on one particular material: language.

Mixture of Experts (MOE)

Mixture of Experts (MOE) in AI Large Language Models (LLMs) refers to a system where multiple specialized "expert" models work together to provide a solution. Each expert is good at specific tasks. The system decides which expert to consult based on the problem at hand.

Advantages:

  1. Specialization: Each expert is highly skilled in a specific area, leading to better results.

  2. Efficiency: Can be faster as each expert only focuses on what they're good at.

Downsides:

  1. Complexity: Managing multiple experts can be complex.

  2. Cost: More computational resources are usually needed for multiple experts.
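
As a rough sketch of the routing idea, here is a toy Python example with made-up "experts". In a real MOE model the experts are parts of one large neural network and the router is learned from data rather than a list of keywords.

```python
# A toy Mixture of Experts: a router picks which 'expert' should handle a
# question. The experts and keywords are invented for this example.
def maths_expert(question):
    return "I handle numbers and calculations."

def travel_expert(question):
    return "I handle trips, flights and hotels."

def general_expert(question):
    return "I handle everything else."

def route(question):
    """Pick which expert should answer, based on the question."""
    q = question.lower()
    if any(word in q for word in ("add", "sum", "calculate", "percent")):
        return maths_expert
    if any(word in q for word in ("flight", "hotel", "trip")):
        return travel_expert
    return general_expert

question = "Can you calculate 15 percent of 80?"
expert = route(question)
print(expert(question))  # the maths expert answers
```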

Multi-Modal

Multi-modal, in the context of large language models (LLMs) like GPT-4, refers to the ability of a system to understand and generate different types of data, such as text, images, and sound, all within the same framework.

Example Usages:

  1. Customer Support: A multi-modal AI can analyze customer emails and listen to voice calls to provide support solutions.

  2. Content Creation: Generate text-based articles while also suggesting relevant images or graphs.

  3. Healthcare: Analyze both medical records and diagnostic images to assist doctors in diagnosis.

  4. E-commerce: Provide product recommendations based on textual reviews and image-based preferences.

  5. Education: Assess student performance through written assignments and oral presentations.
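
To make the "one system, many kinds of input" idea concrete, here is a toy Python sketch of a single function that accepts text, a tiny image, or audio bytes and handles each one. A real multi-modal model learns to understand all of these jointly inside one neural network; this only shows the shape of the idea.

```python
# A toy multi-modal sketch: one function, several kinds of input.
def describe(item):
    """Give a simple description, whatever kind of input we receive."""
    if isinstance(item, str):
        return f"Text input with {len(item.split())} words."
    if isinstance(item, list):   # a tiny image as a grid of pixel values
        return f"Image input, {len(item)} rows by {len(item[0])} columns."
    if isinstance(item, bytes):  # pretend this is recorded audio
        return f"Audio input, {len(item)} bytes long."
    return "Unknown input type."

print(describe("Please summarise this customer email for me."))
print(describe([[0, 0, 0], [255, 255, 255]]))
print(describe(b"\x00\x01\x02\x03"))
```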

Shadow IT

Shadow IT happens when employees use technology like laptops or software for work tasks without coordinating with the IT department, which is there to ensure safe and effective tech use.

Risks

  • Security vulnerabilities, such as data leaks.

  • Regulatory compliance issues.

  • Additional workload for the IT team to manage and secure these unsanctioned tools.

Shadow AI

Shadow AI is similar to Shadow IT, but focuses on the uncoordinated use of AI tools, bypassing the IT department that could ensure their safe and effective use.

Risks

  • Compromised data privacy.

  • Difficulty in determining accountability for errors.

  • Additional complexities that could disrupt operations.

Example

Suppose a marketing team starts using an AI tool for customer analysis without coordinating with the IT department. Even if the tool offers valuable insights, it might store customer data in a way that does not align with the company's data security and compliance standards.

To sum up, Shadow AI is the uncoordinated use of AI technologies within a company. While IT departments in modern organisations aim to help employees adopt new tools safely, bypassing this coordination poses risks like security vulnerabilities, compliance issues, and additional complexities specific to AI.
