Reading Response — Lavonne — Week 10

This week’s reading, “Invisible Images,” is quite interesting and sparked a lot of ideas for me. The article gives me a brand-new perspective on the images on our digital devices, one I had never considered before.
We are so familiar with taking, uploading, and downloading pictures, thanks to the development of technology. We use our phones to photograph ourselves, our daily lives, and almost everything else. The behavior is so normal, so deeply embedded in our lives, that we rarely give it a second thought: What are these photos, really? Where will they go? When you upload one to the internet, even just to your own blog or social media, do you realize how far that image could travel and spread? What is the invisible value inside of it?
Things change dramatically once the internet is involved. In the past we took pictures with a camera and developed the film, an entirely physical process. If you didn’t share a photo and kept it at home, it went nowhere and stayed safe. But now, when you take a picture with a digital device, even if you never upload it to the internet, it has the potential to spread. And what happens then?
The capacity of machines to see and understand images is also developing rapidly. Until recently, a machine was considered a dumb thing that could not “understand” the meaning inside an image; it could only read the image’s lifeless data, such as component shapes, gradients, luminosities, and corners. This was a very limited way to “understand” images.
“The earliest layers of the software pick apart a given image into component shapes, gradients, luminosities, and corners. Those individual components are convolved into synthetic shapes. Deeper in the CNN, the synthetic images are compared to other images the network has been trained to recognize, activating software “neurons” when the network finds similarities.”
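To make the quoted description more concrete, here is a minimal sketch of a convolutional neural network, written in PyTorch as an assumed framework (the article does not name one, and this toy model is my own illustration, not Paglen’s or Li’s actual system): early convolutional layers respond to simple shapes and edges, deeper layers combine them into composite features, and a final layer compares the result against learned categories.

```python
# A toy CNN sketch (illustrative only) mirroring the layered process the
# quote describes: early layers pick out low-level features, deeper layers
# combine them, and a classifier matches the result to trained categories.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # Early layer: responds to edges, gradients, corners
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # Deeper layer: combines low-level features into composite shapes
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Classifier: compares the synthesized features to learned categories
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)       # software "neurons" activate on familiar patterns
        x = x.flatten(1)
        return self.classifier(x)  # scores for each category the network was trained on

# Example: run one 32x32 RGB image through the network
# (random data stands in for a real photo here)
model = TinyCNN()
image = torch.rand(1, 3, 32, 32)
scores = model(image)
print(scores.argmax(dim=1))  # index of the category the network "sees"
```

This is of course far smaller than the networks Facebook or Google actually use, but the structure, stacked convolutions feeding a classifier, is the same basic idea the quote is describing.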
Things have changed. I remember watching a TED talk by Fei-Fei Li about how AI is being taught to read images. In the talk, she suggested that we are not far from an age in which a machine can really “understand” and “read” images the way a human does; we have already found an approach and a possible path in that direction.
Our daily activities on digital devices are also contributing to this process, feeding the machine the “nutrition” it needs to develop the ability to understand human visual language.
“When you put an image on Facebook or other social media, you’re feeding an array of immensely powerful artificial intelligence systems information about how to identify people and how to recognize places and objects, habits and preferences, race, class, and gender identifications, economic statuses, and much more.”
The more images Facebook and Google’s AI systems ingest, the more accurate they become, and the more influence they have on everyday life. The trillions of images we’ve been trained to treat as human-to-human culture are the foundation for increasingly autonomous ways of seeing that bear little resemblance to the visual culture of the past.
It’s as if we are feeding the machine (AI): every day, every second, more and more, faster and faster. The learning and development of AI’s ability to read pictures will proceed at a surprisingly high speed.
“Smaller and smaller moments of human life are being transformed into capital, whether it’s the ability to automatically scan thousands of cars for outstanding court fees, or a moment of recklessness captured from a photograph uploaded to the Internet.”
All of this gave me a lot to think about, and it left me asking myself one question: Do you really think the pictures you take every day are just simple pictures anymore?
