<aside> 🔍 "Longtermism" is the view that improving the long term future is a key moral priority of our time. This can bolster arguments for working on reducing some of the extinction risks that we covered in the last section.

We’ll also explore some views on what our future could look like, and why it might be pretty different from the present. And we'll introduce forecasting: a set of methods for improving and learning from our attempts to predict the future.

Key concepts from this session include:

You will also practice calibration: the skill of making probability estimates that match reality, so that when you say something is 60% likely, it happens about 60% of the time. This is important for making good judgments under uncertainty.

</aside>
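To make the idea of calibration concrete, here is a minimal sketch (with hypothetical prediction data) of how you might score your own forecasts: group predictions by the probability you stated, then compare that stated probability with how often those events actually happened.

```python
# Hypothetical record of forecasts: (stated probability, did the event happen?)
predictions = [
    (0.6, True), (0.6, True), (0.6, False), (0.6, True), (0.6, False),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

def observed_frequencies(records):
    """Group forecasts by stated probability and compute the observed
    frequency of the event within each group."""
    groups = {}
    for stated_p, happened in records:
        groups.setdefault(stated_p, []).append(happened)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in groups.items()}

print(observed_frequencies(predictions))
```

For a well-calibrated forecaster, the events in the 0.6 group should resolve true about 60% of the time; a large gap between stated and observed frequency signals over- or under-confidence.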

Required Materials

Hinge of History:

The case for and against longtermism:

To what extent can we predict the future? How?

What might the future look like?

Strategies for improving the long-term future (beyond reducing existential risks):

Exercise (45 mins., please complete this before your session)

Part 1 (15 mins.)

Helping in the present or in the future?

A commonly held view within the EA community is that it's important to start by thinking about what it really means to make a difference, before thinking about specific ways of doing so. It's hard to do the most good if we haven't tried to get a clearer picture of what doing good means, and as we saw in session 3, clarifying our views here can be a complex task.