Reading response

“A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be ‘intelligent’ doesn’t have much to do with how it reflects humanness back at us”

This idea is interesting to me because it specifically puts the word “humanness” on the table. This word is actually tricky, since there is no clear definition of it. In the article, the author argues that if an A.I. can harm people, then it should know what harm is and what the difference is between itself and contemporary humans. Machines and humans have different “minds,” so the ways machines and humans process issues are obviously different. Machines make choices based on data, but humans make choices based on their knowledge. However, what if a machine, or an A.I., has enough samples to collect data and then make choices? Humans hold discrimination against certain groups, and that can be counted as part of humanness. When an A.I. starts to collect data, it can absolutely learn this existing hierarchy. Even though developers do not intend to give an A.I. emotions, the A.I. will act like a human because of the supporting data. Can that still be counted as “humanness” or not?
