<aside>

Summary |

The different levels and kinds of AI Technologies, how intelligent they currently are, the risks, and what would enable them to improve even more /the limitations they face when it comes to advancement.

Aligning AI to goals requires interdisciplinary collaboration: both technical research and philosophical/public guidance because AI is a special type of technology that has a higher degree of freedom compared to things like a pencil or hammer. This autonomy requires a special approach that considers the risks and also invites a more sensitive attitude towards it. In order to produce technologies that are socially good then we must look at the values of humans and align them to the goals of developing AI.

</aside>


<aside>

Keywords -

Superintelligent: Anything more intelligent than humans

deep learning: find associative patterns in gigantic collections of data.

Reinforced Learning (RL): a method where an "agent" learns to make decisions to achieve a goal through trial and error, receiving rewards or penalties for its actions.

<aside> ✔️

Study Questions:

  1. ask your team mates for two things that they think you could have done better </aside>

Electronics watch

<aside>

Reading notes

What is AI Superintelligence

The Challenge of Value Alignment: from Fairer Algorithms to AI Safety

</aside>

<aside> 📌

Generative AI & LLMs, AI for societal good, SDGs

Goal of Artificial Intelligence (Discipline): Machines can possess the intelligence that humans have

DALL-E: AI image generation using written prompts

Sora: AI video generation

</aside>

<aside> 📌

Foundation Model

Computer Vision Model (CV): processes visual information (images/videos)

Natural Language Processing Model (NLP): processes human language (text/speech).. . ****and orchestrating backend functions

Examples of Foundation Models

Resources used to train:

GPT-3

• 50,257 vocabulary size • 2048 context length • 175B parameters • Trained on 300B tokens • Thousands of V100 GPUs • Months of training • Millions of USD

LLaMA (open source model)

• 32,000 vocabulary size • 2048 context length • 65B parameters • Trained on 1-1.4T tokens • 2,048 A100 GPUs • 21 days of training • $5M USD

Advantages of Foundation Models

[weeeeeee]

Ethical Dilemmas for FMs

Data

What data IS included and what data must be omitted

eg Copyrighted data, proprietary data must be omitted from training. How do we remove this data from the training?

Output

Must think about the possibility of “bad” or “incorrect data” and figure out how to reject/ stop it from being produced

Usage

LLM Security: Jailbreak, Prompt injection, Data poisoning

Known Issues

Hallucinations

ChatGPT making stuff up

Data Leakage

If you are an employee at XYZ company and decide to use generative AI on corporate data your input will be used for training as well. This is a breach of security for the company.

Bias, Toxicity, and, Harms

Deekfake

Generated videos and images that are highly realistic used to scam or misinform

Tools: AI incident database

https://incidentdatabase.ai/

</aside>

<aside> 📌

LLM

Training Recipe

</aside>

<aside> 📌

AI Alignment

Alignment in this field of study means to align the technology goals to human values

Reinforced Learning from Human Feedback (RLHF)

Future of AI

image.png

</aside>

<aside> 📌

Sustainable Development Goals

Jasmin Lewis

Global Ethics Frameworks and Sustainable Procurement

ISO 20400 Sustainable Procurement

“Procurement that has the most positive environmental, social and economic impacts possible over the entire life cycle and that strives to minimise adverse impacts”

The Impacts of Technology

Technology can impact:

Environment:

social

governance

Modern Slavery Regulatory Framework

Modern Slavery Act 2018. Businesses must:

Risks for purchasing electronic hardware and offshore IT consulting services:

(India, China, and South-East Asia)

eg. Solar Panels (polysilicon components) linked to XinJiang Region - forced labour

</aside>

<aside>

</aside>