<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> I first simply requested what was listed in the prompt, because I wondered what kind of poem it tended to generate at random. Actually, I was waiting for a verse that would make me fall in love at first sight.

截屏2023-09-24 下午3.10.31.png

And the answer was not bad, for me at least. Maybe because of “simple” and “meaningful”?

截屏2023-09-24 下午3.15.31.png

Wait, how can you say “In this age of AI, we truly come alive”? I felt like it was praising itself.

截屏2023-09-24 下午3.24.07.png

You should be more confident, ChatGPT! You did not need to apologize for that at all! ;-D

Then I found that I had made a mistake, and that caused the misunderstanding.

So I changed the wording of the second question.

截屏2023-09-24 下午3.28.17.png

Okay, that was a great response. Back to the poem: I added a few requirements to adjust it. I felt that the ending and its meaning were too common, and I wanted it to generate something different from everyone else's.

截屏2023-09-24 下午3.46.00.png

This version was fine, but I liked the first version better. Maybe my requirements were not specific enough: I didn't give it examples, so it didn't know what a common poem would talk about or how to avoid those things.

Note:

The elements used to compose the poem were the same ones many other students got. It uses “canvas”, “algorithm”, “gleam”…

</aside>

<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> Matthew Kirschenbaum: “Prepare for the Textpocalypse” (The Atlantic, March 8, 2023)

On the credibility of the content:

• Actually, this reminds me of Wikipedia, where every user can edit the content and the content is accessible to almost everyone. For me, Wikipedia provides convenient access to a vast array of information on specific terms, and letting everyone contribute to it was a great idea. However, we students are not allowed to cite Wikipedia in our papers, because our professors told us the content can be unreliable precisely because anyone can edit it. The issue with Wikipedia lies in its open editing policy, which often leads to inaccuracies, bias, and even vandalism. Even without AI, it can end up like this, and yet it is still useful for providing background information. What will it look like once LLM-generated text is poured in as well?

“What if, in the end, we are done in not by intercontinental ballistic missiles or climate change, not by microscopic pathogens or a mountain-size meteor, but by … text? Simple, plain, unadorned text, but in quantities so immense as to be all but unimaginable—a tsunami of text swept into a self-perpetuating cataract of content that makes it functionally impossible to reliably communicate in any digital setting?”

It could be interesting if we define the word “text” in different ways. Until now, it has mostly meant the information provided online: websites, emails, and so on. However, maybe in the future, with the development of technology, text will not be limited to that. It could be any information we take in through different channels, for example the things we see, touch, eat, and perceive. Of course, it creeps me out to think that AI with LLMs could change these things.

Temperature Parameter: The study used a "temperature" parameter to control the variability of ChatGPT-4's responses. A lower temperature (temperature = 0) was used to make the responses more deterministic, although some variability was still observed. To some degree, temperature is like a person's state of mind: when it is low, I'm emotionally stable and can deal with things smoothly.
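As a rough illustration of what setting temperature = 0 means in practice, here is a minimal sketch assuming the pre-v1 OpenAI Python SDK (`openai.ChatCompletion`); the model name, prompt, and API key are placeholders, not the study's actual setup.

```python
# Minimal sketch: requesting a poem with temperature = 0.
# Assumes the pre-v1 OpenAI Python SDK; model, prompt, and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Write a simple, meaningful poem about AI."}
    ],
    temperature=0,  # low temperature -> more deterministic, less varied output
)

print(response["choices"][0]["message"]["content"])
```

Even at temperature = 0 the responses are not guaranteed to be identical across calls, which matches the residual variability the study reports.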