W9 Reading Response: The Red Stack Attack and ASI

Over the next decades, the acceleration of artificial intelligence, robotics, and genomics will change our planet and our humanity as well. We know that we should look forward, but we may not know how quickly or how deeply those changes will happen. We are sitting at a once-in-a-lifetime moment for humanity to proactively think, debate, discuss, and act upon some of the changes that are coming. And yet I fear that too few of us actually understand what is really happening, too few of us are having the conversation, and too few of us are acting on it. We need to start talking about morals and ethics when we talk about technology. The truth is that technology has no morals; morals are placed upon technology by its creators and by its users. As futurist Gerd Leonhard recently wrote in his book Technology vs. Humanity: "The fundamental challenge will be that while technology knows no ethics, norms, or beliefs, the effective functioning of every human and every society is entirely predicated upon them." These are the frameworks upon which our societies are built, and we as technologists, corporations, governments, and societies are pursuing these amazing, advancing technologies without necessarily thinking about the moral and ethical overlay we are placing upon them.

However, without actually thinking about adding a moral layer to these algorithms, they could lead us to the wrong decisions and the wrong outcomes. For instance, in the US, courts adopted an algorithm meant to help judges determine whether or not to grant a prisoner parole, based on the likelihood that the prisoner might commit another crime in the future. The algorithm ingested vast amounts of data and was meant to predict whether one individual posed a greater risk to society than another. The issue is that the algorithm was massively biased. An analysis by the organization ProPublica showed that the algorithm was 77 percent more likely to rate a Black prisoner as a higher risk to society than a white prisoner. And that bias was built into the algorithm by its creators.
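To make that kind of bias concrete: one way auditors such as ProPublica probe a risk-score system is to compare error rates across racial groups, asking how often people who never went on to reoffend were nonetheless flagged as high risk. Below is a minimal Python sketch of that style of audit; the records, field names, and numbers are invented for illustration and are not ProPublica's actual dataset or code.

```python
# A minimal sketch (not ProPublica's actual analysis) of auditing a risk-score
# algorithm for bias: compare false positive rates across groups, i.e. how often
# people who did NOT reoffend were nonetheless labeled high risk.
from collections import defaultdict

# Hypothetical records: the group, the algorithm's label, and the observed
# outcome two years later. Field names are assumptions for illustration.
records = [
    {"group": "black", "labeled_high_risk": True,  "reoffended": False},
    {"group": "black", "labeled_high_risk": False, "reoffended": False},
    {"group": "white", "labeled_high_risk": False, "reoffended": False},
    {"group": "white", "labeled_high_risk": True,  "reoffended": True},
    # ... a real audit would cover thousands of cases
]

def false_positive_rates(rows):
    """False positive rate per group: P(labeled high risk | did not reoffend)."""
    flagged = defaultdict(int)    # non-reoffenders labeled high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for r in rows:
        if not r["reoffended"]:
            negatives[r["group"]] += 1
            if r["labeled_high_risk"]:
                flagged[r["group"]] += 1
    return {g: flagged[g] / negatives[g] for g in negatives if negatives[g]}

print(false_positive_rates(records))
# If one group's rate is far higher, the algorithm's mistakes fall more heavily
# on that group even when its overall accuracy looks similar for everyone.
```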
