If your Twitter feed is anything like mine, then you’ll have been inundated with people talking about the latest round of AI tools. ChatGPT, the latest release from OpenAI, just about broke the internet.


In this post, we will spend a bit of time explaining why people are getting so excited (and scared) by this technology, before exploring some of the policy implications, which are wide-ranging and complex, and therefore not well understood.

If you want a bit more history on the development of the AI models that sit behind the tools we talk about today, there is a good short summary of GPT-1 to 3 here. But the headline is that in 2015, Elon Musk, Sam Altman and other investors including Reid Hoffman and Peter Thiel created OpenAI - a research company that would make its AI research open to the public. Its first major paper was published in 2018, which led to the first version of the Generative Pre-trained Transformer (GPT) software. These language models are basically text prediction models - you enter a prompt and the algorithm, having been trained on a massive dataset, spits out what it thinks should come next. GPT-1 was released in 2018, GPT-2 in 2019 and GPT-3 in 2020. GPT-4 is expected later this year and is set to be an even bigger leap forward (though what that will look like is being kept under very tight wraps). OpenAI is reportedly aiming for $1bn in revenue by 2024.
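To make the “predict what comes next” idea concrete, here is a deliberately toy sketch in Python. It is *not* how GPT works internally - GPT uses a transformer neural network trained on billions of words - but it captures the same core mechanic: learn which words tend to follow which from training text, then extend a prompt one word at a time. The tiny corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

# Toy next-word predictor: a bigram model trained on a tiny corpus.
# Real GPT models use transformer neural networks and vastly larger
# datasets, but the basic loop - predict the next token, append it,
# repeat - is the same idea.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Record which words follow each word in the training data.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def complete(prompt, max_words=5, seed=0):
    """Extend a prompt by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # we never saw this word mid-sentence in training
        words.append(rng.choice(candidates))
    return " ".join(words)

print(complete("the cat"))
```

Feed it “the cat” and it continues with “sat on the …”, because that is what the training text taught it. Scale the corpus up to a large slice of the internet and the model up to billions of parameters, and you get something that starts to look like ChatGPT.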



It’s worth taking a look at what people are doing with these language models. Here are a bunch of examples to pique your interest…

  1. ChatGPT → as mentioned above, ChatGPT interacts in a conversational way. Ask it anything and it will return a full response, including code. Here’s one based on our namesake, Thingvellir National Park in Iceland:

[Screenshot: ChatGPT’s response to a question about Thingvellir National Park]

  2. Search → This leads directly to search. In many ways the natural extension of Ask Jeeves (for those that remember it…), GPT is being used to build new Google-like search engines based on a subset of search data. For instance:

    1. Metaphor is a general search engine, designed to return links rather than text.
    2. Elicit is designed specifically for academic research and provides summaries of research papers.
    3. And there are models built for specific sectors too, like this for biomedicine.

    BVP have written a good post on the broader search category for those that want to dig deeper, including the graphic below.

[Graphic: the AI search landscape, from BVP’s post]

  3. Writing → There are a host of writing aids now using AI to help draft, develop arguments or add creative ideas. My favourite so far is Lex. You can see Nathan describing how it works in the video below, and sign up to get access here.

https://www.youtube.com/watch?v=7Cao0oy1CBg

  4. Image generation → There are numerous companies developing text-to-image technology.

    1. DALL·E 2 is OpenAI’s version, which is great. Definitely play around with it…
    2. But there are others, including Midjourney and Stable Diffusion.
    3. And you can also generate images by training AI on your own art.

    https://twitter.com/notiansans/status/1605700201053765632?s=46&t=VdHDoHul4pwMrwLbCDaqrQ

  5. Email → automating various responses in different styles.

https://twitter.com/JamesIvings/status/1602855048148500480?s=20&t=HdfRWN1Fo7e5V0N-5UZUoA

  6. Presentations → for instance, this has just been launched by Tome. (Definitely watch the video in the tweet - it’s pretty impressive.)

https://twitter.com/hliriani/status/1602756026838568961?s=20&t=e-XnTqoQM9-5698AjjGPcw

  7. Coding → Replit recently released Ghostwriter, its AI coding feature, and GitHub’s Copilot is built on OpenAI’s Codex. But there are many others being developed too, including for debugging:

https://twitter.com/ntkris/status/1603780496831528967?s=20&t=JZHkitikK7wOeH4_QGs6ow