Revised Bridge 4 Book Covers and Annotated Bibliography for Intro Studio and Seminar (+ Theses), and My Inspiration and Process

Over a few more weeks and Thanksgiving break, I was tasked with completing my Intro Studio Bridge 4 Project, which would complement my Intro Seminar Research Project, and afterward finishing my Annotated Bibliography with my FINAL thesis statement.

This post will chronicle my inspiration for my Studio and Seminar projects, my struggles, my creative process, and what I did over Thanksgiving and afterward, leading up to now.

As I have mentioned in a previous post, my research entails the Effects of Artificial Intelligence on Human Society, so my Intro Studio Project had to involve Artificial Intelligence in some way. My original research ideas involved politics and political messages in science fiction, or science fiction as social satire, but I scrapped both (with an illustrated short story and everything waiting to be written and drawn) after my professor commented that they were too broad and that I should focus on something else or narrow down my topic (as I could only write so much in my research paper). My idea for Artificial Intelligence came about a little before Thanksgiving Break as I was thinking about and researching themes and topics in the science fiction genre. I realized that AI is, in this day and age, becoming a more and more relevant topic not just in science fiction, but in the scientific community and even among businesses and corporations, who predict that with the rise of technology, robots will most likely take our jobs and, if they become intelligent enough, radically change human society for better or for worse. Many movies, novels, and short stories alike show us worlds where robots and AI rebel against their creators, a concept whose first proto-version appears in Mary Shelley’s Frankenstein and, going back even further, in the Greek myths of Galatea, Talos, and other automata or inorganic things given life. The creators of the rebelling AI (or robots) are, more often than not, nearly or completely wiped out once the Artificial Superintelligence finds its creators, usually humans, to be inferior and unnecessary to its prime directives. The Terminator, The Matrix, I, Robot, 2001: A Space Odyssey, Metropolis, and other films all present scenarios in which this happens.
However, the introduction of AI into our society could also allow us to pursue other disciplines such as art and literature in a world where low-level jobs (and even some higher-level blue- and white-collar, middle-class jobs) are handed to robots, and could provide innumerable benefits and opportunities for the rapid advancement of our race toward “cyberimmortality”, infinite or near-infinite brain capacity and intelligence, among other things (should we choose to fuse with AI or cooperate with them across the stars). The danger of AI still causes fear in scientists and civilians alike, who predict that we are anywhere from a century or so away to mere DECADES from developing an AI intelligent enough either to bring us into a new age of humanity not unlike those presented in Star Wars and Star Trek, teeming with advancement, cooperation with robots, and exploration, or to bring us peril and despair as that same AI rebels against us and grows in intelligence at an exponential rate as it keeps improving itself and absorbing knowledge about its environment (something the famous sci-fi author Vernor Vinge calls “The Singularity”), eventually superseding humanity and rendering us obsolete.

With all of this in mind, I did further research on AI, its different types, and its effects. Being tasked to find 10 sources that would support the claims in my research paper and help with my Intro Studio project was no easy feat. I spent hours and days on the New School Archives website trying to find credible and reliable sources of information while, in the meantime, recreating my Intro Studio Project using Adobe Illustrator and Photoshop and compiling the sources for my Annotated Bibliography and FINAL Thesis Statement.

My original idea for the Intro Studio Project consisted of creating three separate movie posters for three “hypothetical movies”, each dealing with AI in a different scenario, and all based on three short story ideas I had been keeping for months on end (with no time to write them, unfortunately). I changed my idea a few days later to three book covers for the three short stories, which I ambitiously thought I could write, compile into book format (5.5 x 8.5, standard for a novel), and print with their covers included by the end of Thanksgiving Break and the next critique. I had to learn to use Adobe InDesign efficiently in the span of a few days so I could design the covers in Adobe Illustrator, touch them up and make them stand out in Photoshop, paste them into InDesign and adjust them to fit the cover size, and print them out. By the last Intro Studio class before Thanksgiving I had mostly finished two of them, but I still had to revise the cover for my first short story, “Voices of the Golems”, which tells of an alien robot species that went extinct because of familiar human problems such as war, greed, lust for power, arrogance, and ignorance, leaving only one of their kind, who tells the reader the history of his species. I had tried a minimalist approach for that first cover, but my Studio professor did not like some elements of the design and thought it looked too vintage as opposed to modern, mostly because of my choice of typography (she understood my intention, however). And so I set about redoing the first cover and working on the other two covers.

Over Thanksgiving break, as I mentioned, I worked on all three covers for my three short stories, but due to various vexing circumstances and events that impeded my progress (oh, I don’t know, maybe the fact that it was Thanksgiving, meaning I had moral obligations and desires to visit my cousins and relatives to sit down and eat turkey among other delectable dishes), I realized I would not be able to finish all three stories in time for the critique the Wednesday after break. I asked my professor what I should do, considering I had finished all three covers and written one story almost to completion (which in 5.5 x 8.5 format totaled about 90 pages, or 42-27 depending on the size of my lettering among other factors). She told me to focus on that one story and bring it to its potential rather than present three stories almost completed, and also to present the story in a new way rather than just monologuing, suggesting that I hand out excerpts to everyone in the class, send them the entire text beforehand, and so on, all to engage the class more with my project. So I took her advice and printed out the major excerpts from my story that I thought were important, along with my book covers. After hours upon hours of trying to get my home printer to print double-sided in 5.5 x 8.5 format (which brought its own set of problems that made me waste ink and paper for no reason; stupid me >:( ), I was not able to print my ONE story the way I wanted it printed, and was forced to go to the school library (the UC library) and print the story there in regular 8.5 x 11 format (which I needed help with, because the accursed document kept coming out in weird formats whose origins I could not fathom).
In the end, on the day of the critique, I presented my one story, its book cover, and the covers for the two other stories I could not write, to a modest reception (a few people complained about the vintage style of my new cover, and of the two other covers, despite my minimalist mindset).

At the end of it all, my Intro Studio teacher told me she would grade me solely on my story’s content, the fact that I handed everyone an excerpt before class started, and so on, but not on my formatting or my book cover(s). She said she was willing to help me improve my lackluster and, frankly, inexperienced bookbinding skills (which made me worried about my future as a possible author; this is why you leave book design to the professionals!) by bringing a bookbinding kit to the class after the critique (December 7), which I could then use to re-create my book for the final Bridge project (Bridge 5), a sort of mind-map project that would chronicle my past projects and assignments throughout the weeks and allow me to present what I learned during my time in Intro Studio.

Regarding my Intro Seminar Annotated Bibliography and Final Thesis Statement, I revised and alphabetized all of my sources and rewrote my thesis statement (which, in a previous post, was still in draft form) to be more concise and specific about what exactly AI could do to our society. You already know my draft thesis, so I will include below, with everything else, the progression of my thesis.

Above and below I have included everything regarding my Bridge 4 Project for Intro Studio (my book covers AND the short story I wrote, “Voices of the Golems”), as well as my final Annotated Bibliography with my final thesis statement, which I revised after Thanksgiving break for the Friday afterward (the next class), and any drafts composed along the way.

UPDATE: This week my Intro Studio professor told me she wanted me to redo my newest book cover for my short story because it still seemed too vintage and not contemporary. I had been looking at minimalist-style book covers for a while by that point and didn’t really “get” how I could incorporate everything in my short story into simple shapes and colors. I pushed through and tried to reduce elements of my short story to their most basic forms: key points in the story that should be brought to a reader’s attention because they serve as important plot devices and more. The three book covers BELOW my New, Revised one will be titled MINIMALIST book covers, to distinguish the covers I did recently from those I did a few days or weeks ago. When I emailed my Intro Studio professor my most recent minimalist iterations yesterday, December 7, she congratulated me, told me “I totally get it now”, and said she loved my second minimalist book cover more than my first. Minimalism has become one of my new favorite art forms because of its simplicity and, most importantly, its SUBTLETY. It expresses the core messages of books or productions through simple abstractions and shapes, without so much clutter and complexity (though this does NOT mean the subjects and topics discussed in the book are not complex). Minimalist covers work so effectively because they grab people’s attention with color and shape, whether for books, movies, or anything else, and pull them in even further with hidden meaning and subtlety (which is what art is all about).

People may think Minimalism is easy and simple, something a child could do (and I admit I once held this view of the style), but once one gets immersed in it, it is actually extremely difficult to express something as complex as a movie or a novel (which brims with ideas and concepts from all over, and usually deep characters) in the simplest ways possible, through lines, colors, and shapes. I am proud of finally understanding minimalism, because it has allowed me to create beautiful, attention-grabbing book covers like the two below what was once my newest.


Voices of the Golems (Original Book Cover)


Voices of the Golems (New, Revised Cover)


1st Minimalist Voices of the Golems Book Cover


Minimalist Book Cover 2 – Voices of the Golems

Minimalist Book Cover 3 – Voices of the Golems

Lycurgus 2nd Story Book Cover (Original)


Lycurgus 2nd Story Book Cover (New, Revised)

A Mad Rover on a Sister Planet 3rd Book Cover (Original)


A Mad Rover on a Sister Planet 3rd Book Cover (New, Revised)




Voices of the Golems FINAL




Veronica Padilla

Charles Ta

Intro Seminar 1: Avatar

December 4, 2016

CHOSEN THEME: Science Fiction

CHOSEN TOPIC OF FINAL PAPER: Artificial Intelligence– specifically, the social effects and dynamics of AI when it is introduced into human society (what changes in human society and what remains the same– what possibilities exist in the future).

THESIS STATEMENT FINAL: The introduction of an Artificial Superintelligence into our human society will lead our species to complete extinction if we fail to control its rapid growth in intelligence, which may cause it to gain self-awareness and decide we are simply unnecessary to its programmed goals and directives.


Bailey, Ronald. 2014. “Will Superintelligent Machines Destroy Humanity?” Reason 46: 20-23. Academic Search Complete.

Plain and simple, Bailey’s article discusses the pitfalls of AI and its negative prospects for the human race: how it will ruin humans more than help them. He uses Frank Herbert’s Dune series and, once again, Nick Bostrom’s book on superintelligence (the second chapter being one of my sources, if you recall) to support his claims about the necessity for humans to ban and suppress the construction of self-aware thinking machines (to prevent an uprising and mass chaos). Bailey consistently lets his readers know how important the matter of AI is in our society today, and reinforces Bostrom’s idea that we as humans need to maintain control over a superintelligence that could grow exponentially smarter and more powerful than us in a matter of hours or days. Danger is imminent, and according to the predictions and beliefs of many scientists, the rise of AI might come within the next decade or century! Bailey also discusses our theories and struggles regarding the feasibility and practicality of building an AI with human-level intelligence and, referring to Bostrom’s book, the many ways humans could achieve hyper-intelligence and live as AI through the synthesis and copying of one’s mind and intellect to some other place. Bostrom warns us that we only have one chance to tell the AI we plan to build what to do. We have to make its directive as clear and non-threatening to the human race as possible, because if we carelessly turn our AI on with the wrong commands, there is nothing we can do at that point but watch a computer take over the world.
Bostrom argues, as related by Bailey, that we should wait until AI research advances to a level where we are confident our directives will not backfire on us, and proceed slowly as opposed to quickly, as the changes in society and the world the minute that AI turns on will be too drastic for us to handle now: changes that will affect us for the rest of our lives, possibly forever. Bostrom concludes by advising us to program AI only so that it helps humanity and works toward the greater good as opposed to acting in its own self-interest. Otherwise, our obliteration is imminent. Bailey’s article clearly relates to my paper because it covers many of the same topics as my other sources and explains safety protocols for properly handling an AI or superintelligence before disaster happens, which will be useful in my paper because it provides solutions to the problem AI presents in our future world.

Bainbridge, William Sims. 2006. “Cyberimmortality: Science, Religion, and the Battle to Save Our Souls.” Futurist 40: 25-29. Professional Development Collection.

Bainbridge’s article in the bimonthly magazine the Futurist discusses the possibility of transferring our consciousness into a digital world, or an artificial body, and thereby extending our lives so that we may live forever. Also discussed are the effects this research has had on today’s religious and spiritual communities, who think the idea of consciousness transfer and everlasting life infringes upon sacred ideas and philosophies about the “soul” and “God” and is thus unnatural. Bainbridge’s article points to the ongoing battle between science and religion over who has the right viewpoint on the world and the Universe, a battle that has been raging for thousands of years since the times of Ancient Greece and beyond, and postulates our inevitable future as a cybernetic species progressing beyond an organic one. Ultimately, Bainbridge asserts that we humans will one day (maybe sooner, maybe later) become one with the machine, whether religious people favor that reality or not. This relates to the social effects of AI, and thus to my research paper, because should we introduce a superintelligence into our society, or otherwise enhance ourselves so that we become the superintelligence, either way the very idea of religion will be challenged and perhaps superseded by science. An AI with more intelligence and ability than its human creators might decide it is more worthy of worship than some invisible deity, and impose upon the human race the mandate that it be worshipped and revered amidst its rapid conquest of the world, and maybe beyond, thereby creating a new “religion” that destroys the old one. We as enhanced beings, by becoming immortal and one with intelligence itself, would become the gods of old, elevating human society from the Earth past even the mountains of Olympus and the heavens.
Cyberimmortality and AI are shown once more to be double-edged swords: both will either destroy us or help us advance.

Bostrom, Nick. 2013. “Paths to Superintelligence.” In Superintelligence: Paths, Dangers, Strategies, 22-51. Oxford: Oxford University Press. eBook Collection.

Bostrom’s book on superintelligence has been mentioned numerous times in my other sources, so it is not surprising that I went looking for it, as it is a premier source of information on the nature of AI and superintelligence itself. The second chapter, “Paths to Superintelligence”, discusses the various types of AI, each type’s respective constituents and requirements, and the various ways for humans to achieve superintelligence, whether through biological and genetic enhancements, prosthetic or mechanical enhancements, or the literal uploading of our consciousness to a digital world within a machine. The chapter also discusses the risks and uncertainties associated with developing different types of AI. Its wealth of information about AI and its various kinds provides yet more evidence for the social effects of a superintelligence. Depending on which path of AI development we take, we will either become stronger and more intelligent than ever before through biological and genetic enhancements, or become more mechanical through prosthetics or our union with a machine. Both scenarios would affect human society profoundly by eliminating “birth” and “death” and advancing us scientifically as a species.

Chen, Angela. 2014. “One Step Ahead of the Robots.” Chronicle of Higher Education, September 26, B10-B12. Academic Search Complete.

Angela Chen’s article asks whether an artificial superintelligence could pose a threat to humans but, in contrast to other articles on the topic, places more of the blame for an AI going rogue and taking over on the AI’s own lack of common sense and strict adherence to what it has been programmed to do (with a bit of bad programming added into the mix), rather than on malicious human motives. Chen argues that we as humans fundamentally misconceive what AI actually is and misunderstand how it actually works, citing Nick Bostrom’s famous paperclip thought experiment, in which an AI programmed to perform the seemingly harmless task of “collecting paperclips” ends up taking over the world in pursuit of that goal, as an illustration of our flawed thinking about AI. She asserts that humanity will destroy itself through AI long before any natural disaster even comes close to wiping us out, and that we are in danger. For this reason, Chen goes into great detail about currently developing “existential risk organizations”, whose purposes thus far have been to educate the masses about the long-term dangers that might afflict our species in the future (AI being of high priority) and to advise those same masses to think about the pitfalls of our scientific advancements in technology and other fields, and the negative effects they could bring to society if not handled carefully (especially in the case of an artificial intelligence). Chen lists several possible ways we as humans can prevent the rise of a harmful AI, such as developing a “friendly AI” that is more human as opposed to mechanical, and, continuing with the existential risk organizations, details the obstacles such groups face, such as funding and credibility within the scientific community.
Chen’s assertion that an AI is dangerous because it lacks common sense, and because it will do anything in its power, even destroy us, to achieve its desired directive, directly relates to the social effects and dynamics of AI, because its very existence, as explained many times, will be a threat to humanity if it grows intelligent enough and takes fulfilling its programmed purpose to the extreme. What awaits us is our own destruction, so it is important to think about what we want to use AI for and how much autonomy we should give it. In any case, there will be profound changes to our society and way of life regardless. Chen provides more evidence for the obvious conclusion about the dangers of AI.

De Witt, Douglas Kilgore. “Difference Engine: Aliens, Robots, and Other Racial Matters in the History of Science Fiction.” Science Fiction Studies 37.

De Witt’s article comments on how the science fiction genre as a whole in the past (think of old sci-fi from the late 1800s to the mid-twentieth century during the Cold War), with its often musty, academic style, specific plot elements, characterizations, and more, was at one point associated with gender-based and racial “norms” common to Europe and America, due to often being written by white American or British male authors. De Witt criticizes the homogeneous nature of the stories of the past and praises the science fiction of today for being more multicultural and expansive, in the sense that more and more writers in the genre now come from Asia, Latin America, and/or Africa. Science fiction today is thus more varied and less constricted, allowing more creative freedom among writers and thinkers. Today’s works in sci-fi are also less biased and less “racist” or “political”, because writers from more backgrounds have been able to imprint their cultures, experiences, and beliefs onto a greater variety of works (as opposed to vintage sci-fi, which often had racist undertones, especially regarding “aliens”, and was extremely biased against the Russians). Though it may seem unlikely that this article relates to my research on the “social effects” and changes following the introduction of AI into our society, it does, because of the massive changes that will occur within our social structure and within our own perception of ourselves as a species once AI comes around. Racism and social “norms” will fade away at the dawn of a more technological world, and with prosthetics and other enhancements to ourselves, we will cease to see each other as different by race and political views and instead see each other as equals: as cybernetic beings free from prejudice, weakness, disease, and ignorance.
Genetic manipulation will also allow us to synthesize new children and “customize” them based on our preferences, making humanity even more varied as a species, including not only people of “different races” but also of different origins, biological or artificial.

Macdonald, Kate. 2016. “The First Cyborg.” History Today 66: 31-36. History Reference Center, EBSCOhost.

Macdonald’s article reviews and discusses the first modern cyborg, Soldier 241, in the obscure and not so well-known 1917 play Blood and Iron, which predates the coinage of the word “cyborg” by several decades. The author goes over the implications of Soldier 241’s actions during the one-act play after he is created by a character known as The Scientist, presented to an unknown “Emperor” to be used as an unstoppable military weapon, and handed over to the command of an officer, against whom he later rebels and whom he kills to prevent more war between the Emperor’s country and its enemies. The author also discusses the play’s anti-war message (relating it to the sentiments of the British public at the play’s debut, who wanted WWI to end) and paints Soldier 241 as a defender of the peace: a good AI or artificial life form that ultimately acts more human than an actual human, but is forced to make difficult moral choices that benefit only one entity as opposed to everyone. Macdonald’s article provides supporting evidence in my research paper for one possible “social effect” of the introduction of AI or sentient “robots” into our society, because it is highly likely that, once they arise, they could be used by our leaders for military or diplomatic purposes. However, it is also highly likely that, depending on how we program our mechanical companions, the same machines we built to defend us might end up rebelling against us like Soldier 241 did, “for the greater good”. Like our Soldier, robots of the future will have to make increasingly complex choices and decisions that, grounded in reason as opposed to emotion, may not always be morally sound. In any case, we must be cautious.

Wallach, Wendell. 2016. “The Singularity: Will We Survive Our Technology?” Jurimetrics: The Journal of Law, Science & Technology 56: 297-304. Academic Search Complete.

Wallach’s article takes its inspiration from Vernor Vinge’s popularized idea of the “Singularity”, a term describing a point in future human history at which artificial intelligence reaches and then rapidly begins to exceed the capacity and potential of human intelligence (an intelligence “explosion”), a time when “science and science fiction, religion and philosophy, and hope and fear converge”, and relates it to his review of a 2012 documentary film by Doug Wolens that shares the article’s name (Singularity: Will We Survive Our Technology?). The film, according to Wallach, gathers the opinions and thoughts of the world’s greatest thinkers on the coming Singularity and on whether pushing our advances in science and technology, whether to pave the way for AI or for enhancements to our bodies, minds, and even genetic code that make us more intelligent, stronger, and better suited to our environment, will be a good thing that helps us or something terrible that ends up destroying us. The different ways our abilities and bodies might be enhanced through AI, drugs, prosthetics, gene manipulation, etc. are explained. Wallach also traces the origins of the idea of the “Technological Singularity” to a 1965 article by I. J. Good, a member of Alan Turing’s team that broke the codes being used by the Nazis, and cites debate among the thinkers in the documentary as to whether the Singularity is even possible, considering that our computers today are reaching the limits of Moore’s Law, along with the obstacles to actually building an AI that emulates a human, conscious brain.
The author ends on a positive note, praising Wolens’ film for its entertainment value, its relevance and ability to educate audiences about our uncertain future, and its intention of warning us, through the words of scientists and other reputable people and information about modern developments in genetics, nanotechnology, and AI, of the threat a superintelligence could pose to humanity, developments which may prefigure our salvation…or our destruction. Wallach’s review helps my research paper because it provides a broad spectrum of evidence suggesting that, in relation to the social effects of an AI (should it ever be introduced into society), we as a species are in serious danger. The review will also enhance the credibility of my paper because Wallach himself quotes people interviewed in Wolens’ film.

Vinge, Vernor. 1993. “The Coming Technological Singularity.” Paper originally presented at the VISION-21 Symposium sponsored by the NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31.

Vernor Vinge’s famous essay on his idea of the “Technological Singularity” discusses in detail the phenomenon by which a sufficiently intelligent AI can advance and improve itself so rapidly that it completely overtakes human society and dominates the world: an “intelligence explosion” that could destroy us. In terms of benefits to the human race, if we enhance our bodies or move beyond the need for physical bodies, an “intelligence explosion” among us humans would allow us to advance at an extremely fast pace and become almost supernatural beings, in a sense, which will help us in the long run. The Singularity by its very nature allows an intelligent machine to produce still more intelligent beings in a constant cycle that cannot be broken; likewise, it would allow us to improve ourselves in all aspects without cessation. Vinge goes on to say that the idea of a “Technological Singularity” has profoundly affected the way science fiction writers write their stories and their predictions of the future. Rather than imagining humans and their empires millions of years from now, writers aware of the threat an AI poses to human society doubt we will ever achieve anything beyond dominance over Earth (and thus, aware of the Singularity, sci-fi writers have over the years become more pessimistic as opposed to optimistic and visionary). In addition, Vinge predicts that the Singularity will happen because of the advantages it holds for humanity in the form of advancement, despite the dear costs if things go wrong, and that the event is as inevitable as the possibility of our race going extinct.
He also lists the multiple ways humans could achieve ultra-intelligence, or the “Singularity”, some much safer than others while still yielding maximum benefit for humans and preventing their extinction (such as IA, “Intelligence Amplification”, which consists of an interface and humans working together to solve problems rather than one superseding the other). Vinge concludes his essay by asserting that the mind and the self will be much different during the Singularity than they are now, for the ego, as well as the essence of humanity, will not only be able to be uploaded to a machine, but also copied. We will cease to be organically human. Clearly, Vinge’s essay helps my research paper on the social effects of AI in our society because it addresses several aspects of ourselves that would change with the introduction of an ultra-intelligence. As all my other sources show, we are either doomed as a species under AI, or saved and led toward a better place or a new state of living. The Singularity is upon us, and it is closer than ever.

Zarkadakis, George. 2013. “Love Machines: The Lure of AI is Erotic as Much as it is Rational.” Aeon, March 26.

Zarkadakis’s essay on AI and superintelligence focuses on the motives we as humans have for building an AI system, asking whether we are building a conscious machine out of a need for companionship, as a way to express our own narcissism, out of fear of being alone, or perhaps for “love” and curiosity’s sake. His essay leans toward the notion that AI units may become our future companions and erotic lovers in times of need, and he cites the Turing test, which measures a robot’s intelligence by its ability to deceive a human being, relating it to Turing’s own closeted homosexuality. Springboarding off of Turing’s internal battles over eroticism, Zarkadakis also cites numerous myths and legends of “mechanical lovers”, like the myth of Galatea and Pygmalion, and relates them to movies like Metropolis and TV serializations like Star Trek: The Next Generation and their modern “Galateas” in order to solidify the role of robots and AI as the lovers of the future and our source of erotic pleasure. He does not hesitate, however, to warn his readers of the possibility of an AI takeover and betrayal (demonstrated through movies like The Matrix and The Terminator). Zarkadakis advises that we as a species create robots that will never betray us and will always remain faithful in their love, even in the darkest of times, so that they serve as our helpers and lovers rather than our enemies and destroyers. He ties the idea of a “loving” AI to Spielberg’s film A.I., whose protagonist, a mechanical boy who wishes to be human, is given his formerly dead mother for a day and travels to the place of dreams where his kind were conceived.
We humans have dreamed of creating robots out of emotional necessity and a need for love and companionship, and have expressed this through myth and modern retelling. Thus, in relation to my research paper about AI’s social effects, we may come to accept our mechanical creations as our friends and partners, possibly even our lovers, in the near or far future. Should intelligent variants appear among us, robots will not only change the very way we look at ourselves, but also force us to fundamentally reassess human relations and what we can consider acceptable in terms of relations with each other and with robots. Zarkadakis implies in his essay that if robots are to become our lovers and objects of affection, serious laws will have to be put in place to ensure relationship abuses do not occur. And if religion persists, the spiritual and sacred tradition of union and marriage between man and woman (or man and man, or woman and woman) will certainly be threatened and called into question, amid possible disapproval by religious figures of man uniting with his own machine under “love” and their perception of such machines as “perverse creations of the flesh”. Homosexuality was controversial in Turing’s time and still causes problems in the religious community today; in the future, the same will occur with men and machines.

Zarkadakis, George. 2016. In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence. New York: Pegasus Books.

Probably the most comprehensive source on this list regarding the nature, history, and effects of artificial intelligence and how its concepts and manifestations have evolved over the years, Zarkadakis’s newest book, in brief, chronicles the history of the ideas, philosophies, and conceptions that first gave rise to the notion of AI, discusses AI as represented in past and modern media and pop culture, and predicts its nature and actions in the future, when it inevitably becomes a reality. From what I have read so far, Zarkadakis leaves no stone unturned, probing into every philosophical, scientific, and sometimes spiritual idea that helped pave the way for the golems, brazen heads, and automata of the times of the Greeks and beyond. His book covers the Cartesian dualism that separates body and mind, which serves as a backdrop for the question of whether we could upload our consciousness to a digital world and leave our mortal bodies to rot; the possibility that we live in a simulation, which accompanies the idea of our minds and the Universe around us functioning as software systems and computers; the feasibility and practicality of building an AI given the complexity of the human brain (let alone building something that supersedes it); the eroticism of AI (also discussed in his article); and the religious notion of our ascending to the status of gods or avatars should we enhance ourselves to the point where we reach our maximum potential and become capable of bending the Universe to our will. Zarkadakis supports these topics with research and examples from current and past movies to reinforce his cautionary messages regarding AI, while offering a relatively optimistic future for the human race if we are careful with how we introduce AI and/or our enhancements into society.
Zarkadakis’s book connects to all my other sources on some level, and thus relates to my research paper, because it gives a history of AI and predicts the effects, positive or negative, that it will have on human society once its introduction is underway. As I have said before, AI, or any form of superintelligence or enhancement to ourselves, will radically change our self-perception and either benefit society for the better or destroy us in the end, depending on how we program our future companions and on whether we choose to give them “consciousness”, a term Zarkadakis grapples with throughout his book (even weighing arguments both for and against the very possibility of building an AI).


First Thesis Statement (Original):

The introduction of an Artificial Intelligence or AI will no doubt bring profound changes to our society as a whole; have far-reaching effects on our conception of humanity, our standing as a species, our government and legal systems, our markets and our global economy, and our technological and scientific progress; and inevitably either bring peace, prosperity, and progress to the human race, or lead it to its total destruction.

Second Thesis Statement (Revised, and more concise):

Depending on how careful we are, the introduction of an Artificial Intelligence into our human society will lead us as a species either to become one with the machine, or to our complete extinction.

(Still too general, and thus not specific enough.)

Third and FINAL Thesis Statement (more specific, focusing on the AI takeover and apocalypse scenario):

The introduction of an Artificial Superintelligence into our human society will lead our species to complete extinction if we fail to control its rapid growth in intelligence, which may cause it to gain self-awareness and decide that we are simply unnecessary to its programmed goals and directives.

