Last update: March 9, 2022

Profile photo: https://tombewley.com/images/profile.png

<aside> 👋 Hello! Welcome to this open repository of my (Tom Bewley's) ongoing work and future plans for my PhD research into Explainable AI for Black Box Autonomous Agents. Primarily, this is an experimental mechanism for improving my own organisation and self-accountability, but I'm also secretly hoping that others may one day see it, and that it might even become a springboard for discussion and collaboration. Who knows!

It's being hosted on the superb Notion platform, which means you're able to add comments here if you have an account of your own.

</aside>

📔 Weekly Logbook

<aside> 💡 The logbook contains a relatively refined summary of the work I complete each week, which usually corresponds to what I present in meetings with my supervisors.

</aside>

📊 Results

<aside> 💡 This section contains a less curated dump of results: figures, tables and graphs.

</aside>

🕓 Status

<aside> 💡 The status gives a very brief and high-level overview of what I'm working on at any given time. I expect this will be updated every few weeks.

</aside>

❓ Question Hierarchy

<aside> 💡 This hierarchy of questions and comments aims to capture the essence of my research agenda and my evolving thoughts on the order of priorities.

</aside>

🎓 Long View

<aside> 💡 This is my current, and almost-certainly-too-ambitious, view of how the next few years may be split up into three phases of research.

</aside>

Part A: Understanding Existing Black Box Agents

10/19 → 03/20: Initial exploration of the problem space, with a focus on multi-agent systems, leading to basic experiments in a traffic environment.

04/20 → 08/20: Deeper reflection on key issues in post hoc agent interpretability, including Criticality, eventually leading to the TripleTree/Trees on Demand model.

08/20 → 10/20: TripleTree/Trees on Demand experiments and write-up.

09/20 → xx/xx: Online → Active Learning.

xx/xx → xx/xx: Explaining evolving policies.

xx/xx → xx/xx: Interpretable state representation learning.

Part B: Harnessing Interpretability to (Re)design Self and Other

xx/xx → xx/xx: Adversarial policy inversion.

Part C: Self-interpretation and Theory of Mind

📅 Calendar

<aside> 💡 This calendar view displays all entities in this repository that contain dates.

</aside>