Here is a list of open problems proposed by researchers to address safety concerns in the current development of artificial intelligence. If you are working on a writing project, Charbel can help you develop a plan and find more resources if you need them.
- These projects aim to study the risks associated with the development of general artificial intelligence, to mitigate these risks, or to conduct theoretical and empirical work on them.
- These projects are designed to be accessible and assist individuals in making progress on unresolved problems.
- The projects will be presented at the Turing seminar in France, as well as at the ML4Good bootcamp and AI Safety Sweden.
If you are interested in any of these projects, Charbel can mentor you and provide additional resources. The document may be a bit messy in places, but that's normal.
During a project:
- Step 1: Before the hackathon, choose a project and your partner if you want to work in a group. The list of already taken projects is available here. You can discuss the topics with a seminar supervisor at the end of a class. To form your group:
- For a writing project, a group can consist of one or two people. So, working alone is also possible.
- For a coding project, a pair is the ideal size. In my opinion, a group of three is not a good idea.
- Register here in advance if possible (you can list multiple projects).
- Step 2: Saturday 10 am - start of the hackathon. Read the resources and discuss with a member of Effisciences to get more details, links, and additional leads.
- Step 3: Saturday afternoon - Send your plan as a bulleted list to a supervisor of the Turing seminar (Charbel, Jeanne, Léo).
- Step 4: Have fun and ask questions regularly, at least 3 per day 🙂
- Step 5: Submission: a shared Google document, unless otherwise specified, of about 5 pages of high quality, in French or English. The topics are in theory calibrated to be achievable in 2 days, and it should be possible to submit a good draft by Sunday at 8 pm.
Technical AI Safety
Technical Risk Study
- Probability of deceptive alignment
- What is the probability that AI poses catastrophic risks assuming there is no deceptive alignment?
- What would a superintelligence be able to do? Give estimates, be rigorous, and link this to the risks.