Circulars

While Nayomi is splitting the documents to feed into the large language model, I have been searching for the large language model best suited to this work, which has led me to the question: what criteria should we be judging an LLM on?

Is it about…

When thinking about the project as a whole, should we be…

The questions go on. Should we use tried and tested tools that we are familiar with (GPT, LLaMA), or do we engage with the newer tools that show promise (Falcon, which is suddenly at the top of Hugging Face's comparison leaderboard, or BLOOM, which is designed to be more environmentally friendly)? Is it ethical to try both a trusted tool and an experimental one, and compare the results?

Nayomi and I will soon be feeding data into a large language model, and so these questions will have to be swept aside in the name of ‘actually getting stuff done.’ But I still think they’re pertinent, and welcome any thoughts.

Comms and computer vision

Don’t listen to me – go read Kaspar’s wonderfully detailed discussion of the work instead: Weaving Communication Objects (notion.so)