<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> Here is the link! https://editor.p5js.org/Lisa-HuangZijin/sketches/GFlEQ64Ya Phones and ID cards are similar in size and shape, so color and layout are probably the main cues the AI uses to tell them apart. Because of this, I tried to keep everything else in the photos the same, to avoid the AI relying on other information that could be meaningless. A rough sketch of how a classifier like this is wired up is below this note.
</aside>
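For anyone curious about the wiring: this is a minimal sketch, assuming ml5.js (0.x) with an image model exported from Teachable Machine. The model URL is a placeholder, not my actual model, and helper names like `classifyVideo` are just illustrative.

```js
let classifier;
let video;
let label = "waiting...";

// Placeholder Teachable Machine model URL -- swap in your own exported model.
const modelURL = "https://teachablemachine.withgoogle.com/models/XXXX/";

function preload() {
  // Load the trained image classifier before setup runs
  classifier = ml5.imageClassifier(modelURL + "model.json");
}

function setup() {
  createCanvas(320, 260);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  classifyVideo(); // start the classification loop
}

function classifyVideo() {
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results are sorted by confidence; show the top label
  label = results[0].label;
  classifyVideo(); // classify the next frame
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(16);
  textAlign(CENTER);
  text(label, width / 2, height - 4);
}
```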
<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> Anya’s work
I tried Anya’s work. It is great to see that Anya set up a class named “unknown” to cover the situation where there is no hand holding a phone or ID card in the frame! When I tested it, it worked well with my ID card. My phone was not as lucky: it kept being recognized as “unknown”, no matter whether I showed the front or the back, or held it close or far. Maybe the samples used to train the AI don’t include a white phone? That could be annoying, because it is hard to collect phones in every color and photograph them all. ID cards follow a standard, so they should be easier to detect, but what if there are stickers of other colors on them? In any case, I am happy it never recognized my phone as an ID card, which happened a lot when I was testing my own sketch. One debugging idea is sketched right after this note.
</aside>
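If anyone wants to dig into why a white phone lands in “unknown”, one thing worth trying (under the same assumed ml5.js 0.x setup as the sketch above) is logging every class’s confidence instead of only the winning label:

```js
// Replace gotResult in the sketch above with this version to print the
// confidence of every class, not just the top one. With ml5.js 0.x, each
// result carries a label and a confidence between 0 and 1, sorted descending.
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  for (const r of results) {
    console.log(`${r.label}: ${r.confidence.toFixed(3)}`);
  }
  label = results[0].label; // still display the top label
  classifyVideo();          // keep the classification loop going
}
```

If “phone” comes in a close second for the white phone, a few more white-phone training images would probably fix it; if it sits near zero, the model may really have learned color as its main cue.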
<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" />
One reason may be that people do not know enough about AI, even though we are the ones who create it. From a human point of view, we can’t prepare the data perfectly for AI tools: we might forget to remove spurious information, like the text font a hospital uses, or feed in data that is completely wrong in ways only a human could quickly spot. I don’t dare say we are simply too confident in ourselves, because there are factors that amplify AI tools’ failures, such as the fact that the price we pay is human life instead of money, or that too little time is left for us to react. But the current situation seems to be that the data we use to train AI still needs careful human handling, and the models still require our development. We cannot give up our brains and hands just yet.
</aside>
<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> Finished! I have to say I was surprised by “Facebook’s machine learning tools predict your preferences better than any psychologist.” I do believe AI is very good at this, but that claim sounds too absolute. I used to think humans were the hardest thing to predict, because our behavior can have no correlation with what has happened before, and there are certain logics that no one understands but ourselves. As for theories, there could be plenty still waiting to be discovered. But I wonder if there could be one that calculates the trajectory of my life, so that I won’t be so worried about the future, and the same goes for AI :-)
</aside>
<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> I have seen the Time Warp Scan Filter Effect, the Pixar Cartoon Character Filter, the Teenage Filter, and the Green Screen Filter. Among them, the Time Warp Scan Filter Effect may be the most playful, and the Teenage Filter is the one that makes me cry, because it does more than give you an answer to a certain question.

Nowadays, many camera filters I’ve seen only artificially sharpen facial features, enlarge the eyes to an unnatural extent, and shrink the mouth, creating an exaggerated and unrealistic portrayal of beauty. This is very boring, and it rigidly enforces a single beauty standard while forsaking the diversity of beauty. I was told that the Teenage Filter does much more than that, and I appreciate the hard work of its creators.

</aside>