<aside>
Summary
The whole goal of innovation and engineering is to benefit human performance and productivity. There needs to be a balance between automation and human control if we want to develop safe, trustworthy, and reliable products that advance human progress. This content focuses on the HCAI (Human-Centered AI) framework.
</aside>
<aside>
Resources
</aside>
<aside>
Keywords
• Accountability
• Privacy
• Transparency
• Explainability
• Explicability: capable of being explained and understood because it is logical or sensible
• RST: Reliable, Safe, Trustworthy
• Human Mastery: high human control, low automation
• Requires rapid action: high automation, low human control
• XAI: Explainable Artificial Intelligence, which allows human users to comprehend and trust the results and outputs created by machine learning algorithms (see the sketch after this list)
</aside>
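A minimal sketch of what an XAI-style, per-prediction explanation can look like, assuming a white-box logistic regression whose coefficient-times-feature products serve as the explanation; the feature names, data, and the `explain` helper are hypothetical:

```python
# Minimal sketch: a white-box model whose per-feature contributions act as
# the explanation that accompanies each prediction (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "age", "num_accounts"]   # hypothetical features
X = np.array([[50, 30, 2], [20, 55, 5], [80, 40, 1], [30, 25, 4]])
y = np.array([1, 0, 1, 0])                            # hypothetical labels

model = LogisticRegression().fit(X, y)

def explain(x):
    """Each feature's additive contribution to the decision score."""
    return dict(zip(feature_names, model.coef_[0] * x))

sample = X[0]
print("prediction :", model.predict(sample.reshape(1, -1))[0])
print("explanation:", explain(sample))
```

The point is only that each output is paired with a human-readable account of why it was produced; post-hoc attribution methods extend the same idea to models that are not inherently interpretable.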
<aside> 📌
Measures for Privacy
</aside>
<aside> 📌
CONSENT CRITERIA:
</aside>
<aside> 📌
OCCAM’s RAZOR
A problem-solving principle: when faced with competing explanations for the same phenomenon, the simplest one, i.e. the one with the fewest assumptions, should usually be preferred (a model-selection sketch follows this callout).
Opposite: Hickam's Dictum
</aside>
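One way the razor is often applied in an AI setting is model selection: among candidate models that fit the data comparably well, prefer the one with fewer assumptions (parameters). A minimal, illustrative sketch with made-up numbers and an assumed tolerance:

```python
# Minimal sketch of Occam's razor applied to model selection (illustrative):
# given candidate explanations (models) with comparable fit, prefer the one
# that makes the fewest assumptions (here: fewest parameters).
candidates = [
    {"name": "degree-1 polynomial", "params": 2, "validation_error": 0.031},
    {"name": "degree-7 polynomial", "params": 8, "validation_error": 0.029},
]

TOLERANCE = 0.005  # errors within this range count as "comparable fit" (assumed)
best_error = min(m["validation_error"] for m in candidates)
comparable = [m for m in candidates
              if m["validation_error"] - best_error <= TOLERANCE]

# Among comparably accurate models, choose the simplest one.
chosen = min(comparable, key=lambda m: m["params"])
print("Occam's choice:", chosen["name"])
```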
<aside> 📌
The criteria for trust differ for people and for technology: the nature of the expectations we place on each is different.

</aside>
<aside> 📌
There are four main barriers to Accountability:
Black-box and white-box describe algorithms by their level of internal transparency; they differ in the balance they strike between interpretability and transparency.
Black Box Algorithm: the internal workings are opaque, so individual outputs cannot be directly explained.
White Box Algorithm: the internal logic is interpretable, so outputs can be traced back to understandable rules or weights.
Other explicability measures (e.g. traceability, auditability, and transparent communication about system capabilities) are used when an explanation for a particular output is not possible (see the sketch after this callout).

Ali et al., "Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence", Information Fusion, vol. 99, Nov 2023.
</aside>
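Where a per-output explanation is not feasible (the black-box case), traceability and auditability can still be engineered around the model. A minimal sketch, assuming a hypothetical black-box `predict_fn` and an in-memory stand-in for an append-only audit store:

```python
# Minimal sketch of an auditability measure around a black-box model:
# every call is logged with inputs, output, model version, and timestamp,
# so decisions stay traceable even without per-output explanations.
# `predict_fn` and the log store are hypothetical placeholders.
import hashlib
import json
import time

AUDIT_LOG = []  # stand-in for an append-only audit store

def audited_predict(predict_fn, model_version, inputs):
    output = predict_fn(inputs)                   # opaque black-box call
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,
        "output": output,
    }
    AUDIT_LOG.append(record)                      # traceable decision trail
    return output

# Usage: wrap any black-box scoring function.
result = audited_predict(lambda x: sum(x) > 10, "v1.3", [4, 5, 6])
print(result, len(AUDIT_LOG))
```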
<aside> 📌
A user is exposed to certain ads → the user interacts with an ad → LLMs construct a detailed and sensitive demographic profile of the user = ⚠️ Privacy Issue (see the consent-gate sketch below)
</aside>
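A minimal sketch of a consent gate that would block the profiling step in the flow above before it starts; the `Consent` schema and its fields are illustrative assumptions, not the actual consent criteria from the course material:

```python
# Minimal sketch: profiling from ad-interaction data is only allowed when the
# user has explicitly consented to that specific purpose (hypothetical schema).
from dataclasses import dataclass

@dataclass
class Consent:
    user_id: str
    allow_ad_personalization: bool = False
    allow_demographic_profiling: bool = False   # the sensitive step in the flow above

def build_profile(user_id, interactions, consent: Consent):
    if not consent.allow_demographic_profiling:
        raise PermissionError(f"No profiling consent for user {user_id}")
    # ...downstream profiling would only run past this gate...
    return {"user_id": user_id, "num_interactions": len(interactions)}

consent = Consent(user_id="u42", allow_ad_personalization=True)  # profiling NOT consented
try:
    build_profile("u42", ["ad_click_1", "ad_view_2"], consent)
except PermissionError as e:
    print("blocked:", e)
```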
<aside> 💡
A two-dimensional framework that separates the level of automation from the level of human control.
TL;DR - Balance human vs. computer control across situations by assessing the benefit to humans and the risks of leaning too far to either side.
Methods of HCAI tend to produce designs that are Reliable, Safe, and Trustworthy (RST).
This replaces the earlier one-dimensional view, which held that any increase in automation must come at the cost of lower human control (see the sketch below).
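A minimal sketch of the two-dimensional idea, treating human control and automation as independent axes and mapping them onto the regions named in the keywords above; the 0.5 thresholds and the example inputs are illustrative assumptions:

```python
# Minimal sketch of the 2-D HCAI framework: human control and automation are
# independent axes, so high automation does not force low human control.
# Thresholds and example inputs below are illustrative assumptions.
def hcai_region(human_control: float, automation: float) -> str:
    """Both levels are taken as values in [0, 1]."""
    high_control, high_automation = human_control >= 0.5, automation >= 0.5
    if high_control and high_automation:
        return "RST goal region (Reliable, Safe, Trustworthy)"
    if high_control:
        return "Human mastery (high human control, low automation)"
    if high_automation:
        return "Requires rapid action (high automation, low human control)"
    return "Low automation, low human control"

for control, automation in [(0.9, 0.9), (0.9, 0.2), (0.2, 0.9)]:
    print((control, automation), "->", hcai_region(control, automation))
```

The contrast with the one-dimensional view is the high/high region: the framework's claim is that a design can score high on both axes at once.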

Self-driving car trade-off table (US Society of Automotive Engineers)

Robin Murphy’s Law of autonomous robots: Any deployment of robotic systems will fall short of the target level of autonomy, creating or exacerbating a shortfall in mechanisms for coordination with human problem holders
E.g. disaster from excess autonomy: the Patriot missile system shooting down two friendly aircraft during the Iraq War.
</aside>