amor mundi

Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, January 24, 2015

Don't Just Hope, Support Hope

That a guaranteed income would end hopeless poverty is a reason to support it, that it would begin endless hopeful enrichments is another.

More Dispatches from Libertopia here.

MundiMuster! Save Urban Studies at SFAI: Sign and Share the Letter

Since 2012, many concerns have been raised by faculty, students, staff, alumni, and supporters about a lack of transparency and accountability in the key decision making processes undertaken by the administration of the San Francisco Art Institute. The quite sudden recent suspension of the Urban Studies program seems to be yet another stunning confirmation of these worries.

To say that this decision has been more opaque than transparent is an understatement: Few of the actual stakeholders to such a momentous change at SFAI were aware that it was even under consideration until a public e-mail announced that the decision and the process justifying it had already concluded.

A review of the program is said to have taken place in the Spring of 2014, but few faculty or students involved in the actual program seem to have participated in either the review or the decision-making process. The timing of this review coincided with the absence of key figures in and champions of the program, and the results of the process are at odds with the results of other recent and ongoing review processes that involved wider participation of the relevant stakeholders.

While program changes ultimately depend on the Dean, they are supposed to be addressed in the Faculty Senate. The Urban Studies program was not suspended after a discussion or with a consenting vote of the Faculty Senate.

Dean Schreiber recently commented that Urban Studies does not offer a “robust experience” for students. In the absence of a definition of terms this judgment is hard to gauge, but it is difficult to understand how the accompanying proposal of a BFA with an urban focus more centered on studio classes than interdisciplinary courses could possibly be more robust on any definition. Re-assigning our institutional engagement with urban questions to studio classes will inevitably introduce fissures in the theoretical formulation of the urbanity in question. This envisioned change also runs counter to the current trend of expanding Urban Studies Master’s programs across the country’s academic institutions.

Indeed, a host of courses directly taking up historical and theoretical questions of post-colonial pan-urbanity, environmental sustainability, urban poverty, street protests, and immigrant communities―many of them focusing on urban movements in San Francisco in particular, and taught for years by activists and participants in the movements themselves, such as Laura Fantone and the renowned local historian Chris Carlsson―are vanishing from the 2015 curriculum. The loss of these engagements and the silence of these voices wounds our Institute. Far from a minor shift in focus, these changes can only be understood as a radical dis-engagement with the urban as a real priority at SFAI.

As recently as SFAI’s Strategic Plan for 2013, the President and the Board of Trustees declared that “SFAI will strive to further improve its operations and heighten its ambitions in the service of art, artists, and the Bay Area community.” The decision to suspend Urban Studies contradicts SFAI’s long-held commitment to connect our students to the City and the greater Bay Area artistic community of which we are a part.

The San Francisco Art Institute is a school sited at the thriving heart of a world-historical city. We live and teach and create and connect in the midst of the urgent distress of artist and gallery evictions, in the scrum of venture capitalism’s “disruptions” of public goods and public services, in the face of the Silicon Valley steamroller of reductive tech-talk, in the creative ferment of street protest, all right here, all right now. In such a place and at such a time, at the very moment when other art schools and art programs are taking up the urban with renewed energy and vigor as an indispensable motor of convivial creativity and transformative imagination, it is difficult indeed to understand what considerations have driven this rash decision to suspend an already established and accomplished program here.

Please sign the letter here. And please share this widely.

Friday, January 23, 2015

Long Teaching Day

Teaching term has begun again, and I have two lectures on Fridays, first my undergraduate seminar on digital democracy and anti-democracy, from roughly nine to noon, then my graduate survey of critical theory from roughly one to four. I'm lecturing on my feet the full three hours for each, without much in the way of discussion actually, so today is more or less back-to-back openings for what will always be a long double bill. Flanked by bus and train commutes right around rush hour it's looking like a long slog. Once upon a time I would begin my courses with a gentle overview of the syllabus and policies and an early leave-taking, but since I just have one bite of the apple each week these openings look to be leaps right into the deep end of the pool, maps of conceptual and argumentative terrain that will take up the whole time. This morning, technology and democracy as sites of contestation, and the traffic between material and immaterial in tech-talk; this afternoon, critical theory as post-philosophy, social, cultural, exegetical traditions, fact, figure, fetish, etymological fantasias, much more. So, you should expect blogging to be low to no today and maybe tomorrow if I'm still in recovery/hungover.

Soap Bubbles

After Battlestar Galactica, the most successful TV re-boot has to be Mary Hartman, Mary Hartman as The Walking Dead.

Thursday, January 22, 2015

Syllabus for my Digital Democracy, Digital Anti-Democracy Course (Starting Tomorrow)

Digital Democracy, Digital Anti-Democracy (CS-301G-01)

Spring 2015 01/23/2015-05/08/2015 Lecture Friday 09:00AM - 11:45AM, Main Campus Building, Room MCR

Instructor: Dale Carrico; Contact: dcarrico@sfai.edu, ndaleca@gmail.com
Blog: http://digitaldemocracydigitalantdemocracy.blogspot.com/

Grade Roughly Based On: Att/Part 15%, Reading Notebook 25%, Reading 10%, In-Class Report 10%, Final Keywords Map 40%

Course Description:

This course will try to make sense of the impacts of technological change on public life. We will focus our attention on the ongoing transformation of the public sphere from mass-mediated into peer-to-peer networked. Cyberspace isn't a spirit realm. It belches coal smoke. It is accessed on landfill-destined toxic devices made by wretched wage slaves. It has abetted financial fraud and theft around the world. All too often, its purported "openness" and "freedom" have turned out to be personalized marketing harassment, panoptic surveillance, zero comments, and heat signatures for drone targeting software. We will study the history of modern media formations and transformations, considering the role of media critique from the perspective of several different social struggles in the last era of broadcast media, before fixing our attention on the claims being made by media theorists, digital humanities scholars, and activists in our own technoscientific moment.

Provisional Schedule of Meetings

Week One, January 23: What Are We Talking About When We Talk About "Technology" and "Democracy"?

Week Two, January 30: Digital,

Laurie Anderson: The Language of the Future
Martin Heidegger, The Question Concerning Technology 
Evgeny Morozov, The Perils of Perfectionism
Paul D. Miller (DJ Spooky), Material Memories 
POST READING ONLINE BEFORE CLASS MEETING

Week Three, February 6: The Architecture of Cyberspatial Politics

Lawrence Lessig, The Future of Ideas, Chapter Three: Commons on the Wires
Yochai Benkler, Wealth of Networks, Chapter 12: Conclusion
Michel Bauwens, The Political Economy of Peer Production
Saskia Sassen, Interactions of the Technical and the Social: Digital Formations of the Powerful and the Powerless 
My own, p2p Is Either Pay-to-Peer or Peers-to-Precarity 
Jessica Goodman, The Digital Divide Is Still Leaving Americans Behind 
American Civil Liberties Union, What Is Net Neutrality
Dan Bobkoff, Is Net Neutrality the Real Issue?

Week Four, February 13: Published Public

Dan Gillmor, We the Media, Chapter One: From Tom Paine to Blogs and Beyond
Digby (Heather Parton), The Netroots Revolution
Clay Shirky, Blogs and the Mass Amateurization of Publishing
Aaron Bady, Julian Assange and the Conspiracy to "Destroy the Invisible Government"
Geert Lovink, Blogging: The Nihilist Impulse

Week Five, February 20: Immaterialism

John Perry Barlow, A Declaration of the Independence of Cyberspace
Katherine Hayles, Liberal Subjectivity Imperiled: Norbert Wiener and Cybernetic Anxiety
Paulina Borsook, Cyberselfish
David Golumbia, Cyberlibertarians' Digital Deletion of the Left
Richard Barbrook and Andy Cameron, The Californian Ideology
Eric Hughes, A Cypherpunk's Manifesto
Tim May, The Crypto Anarchist Manifesto

Week Six, February 27: The Architecture of Cyberspatial Politics: Loose Data

Lawrence Lessig, Prefaces to the first and second editions of Code
Evgeny Morozov, Connecting the Dots, Missing the Story
Lawrence Joseph Interviews Frank Pasquale about The Black Box Society
My Own, The Inevitable Cruelty of Algorithmic Mediation
Frank Pasquale, Social Science in an Era of Corporate Big Data
danah boyd and Kate Crawford, Critical Questions for Big Data
Bruce Sterling, Maneki Neko

Week Seven, March 6: Techno Priesthood

Evgeny Morozov, The Meme Hustler
Jedediah Purdy, God of the Digerati
Jaron Lanier, First Church of Robotics
Jalees Rehman, Is Internet-Centrism A Religion?
Mike Bulajewski, The Cult of Sharing
George Scialabba, Review of David Noble's The Religion of Technology

Week Eight, March 13: Total Digital

Jaron Lanier, One Half of a Manifesto
Vernor Vinge, Technological Singularity
Nathan Pensky, Ray Kurzweil Is Wrong: The Singularity Is Not Near
Aaron Labaree, Our Science Fiction Future: Meet the Scientists Trying to Predict the End of the World
My Own, Very Serious Robocalyptics
Marc Stiegler, The Gentle Seduction

Week Nine, March 16-20: Spring Break

Week Ten, March 27: Meet Your Robot God
Screening the film, "Colossus: The Forbin Project"

Week Eleven, April 3: Publicizing Private Goods

Cory Doctorow, You Can't Own Knowledge
James Boyle, The Second Enclosure Movement and the Construction of the Public Domain
David Bollier, Reclaiming the Commons
Astra Taylor, Six Questions on the People's Platform

Week Twelve, April 10: Privatizing Public Goods

Nicholas Carr, Sharecropping the Long Tail
Nicholas Carr, The Economics of Digital Sharecropping
Clay Shirky, Why Small Payments Won't Save Publishing
Scott Timberg, It's Not Just David Byrne and Radiohead: Spotify, Pandora, and How Streaming Music Kills Jazz and Classical 
Scott Timberg Interviews Dave Lowery, Here's How Pandora Is Destroying Musicians
Hamilton Nolan, Microlending Isn't All It's Cracked Up To Be

Week Thirteen, April 17: Securing Insecurity

Charles Mann, Homeland Insecurity
David Brin, Three Cheers for the Surveillance Society!
Lawrence Lessig, Insanely Destructive Devices
Glenn Greenwald, Ewen MacAskill, and Laura Poitras, Edward Snowden: The Whistleblower Behind the NSA Surveillance Revelations
Daniel Ellsberg, Edward Snowden: Saving Us from the United Stasi of America

Week Fourteen, April 24: "Hashtag Activism" I

Evgeny Morozov, Texting Toward Utopia 
Hillary Crosley Coker, 2013 Was the Year of Black Twitter
Michael Arceneaux, Black Twitter's 2013 All Stars
Annalee Newitz, What Happens When Scientists Study Black Twitter
Alicia Garza, A Herstory of the #BlackLivesMatter Movement
Shaquille Brewster, After Ferguson: Is "Hashtag Activism" Spurring Policy Changes?
Jamilah King, When It Comes to Sports Protests, Are T-Shirts Enough?

Week Fifteen, May 1: "Hashtag Activism" II

Paulina Borsook, The Memoirs of a Token: An Aging Berkeley Feminist Examines Wired
Zeynep Tufekci, No, Nate, Brogrammers May Not Be Macho, But That's Not All There Is To It; How French High Theory and Dr. Seuss Can Help Explain Silicon Valley's Gender Blindspots
Sasha Weiss, The Power of #YesAllWomen
Lisa Nakamura, Queer Female of Color: The Highest Difficulty Setting There Is? Gaming Rhetoric as Gender Capital 
Yoonj Kim, #NotYourAsianSidekick Is A Civil Rights Movement for Asian American Women
Jay Hathaway, What Is Gamergate

Week Sixteen, May 8: Digital Humanities, Participatory Aesthetics, and Design Culture

Claire Bishop, The Social Turn and Its Discontents
Adam Kirsch, Technology Is Taking Over English Departments: The False Promise of the Digital Humanities
David Golumbia, Digital Humanities: Two Definitions
Tara McPherson, Why Are Digital Humanities So White?
Roopika Risam, The Race for Digitality
Wendy Hui Kyong Chun, The Dark Side of the Digital Humanities
Bruce Sterling, The Spime
Hal Foster, Design and Crime
FINAL PROJECT DUE IN CLASS; HAND IN NOTEBOOKS WITH FINAL PROJECT

Syllabus for my Introduction to Critical Theory (Starting Tomorrow)

CS 500A: Introduction to Critical Theory, Spring 2015, San Francisco Art Institute

Instructor: Dale Carrico, dcarrico@sfai.edu
Course Blog: http://introcritsfai.blogspot.com/
Fridays 1-3:45, Chestnut Lecture Hall

Rough Basis for Grade: Reading Notebook, 25%; Three Precises, 25%; Fifteen+ Comments, 15%; Final Paper 15-20pp. 35%.

Course Description:

"The philosophers hitherto have only interpreted the world, but the point is to change it."--Karl Marx.

This course is a chronological and thematic survey of key texts in critical and cultural theory. A skirmish in the long rivalry of philosophy and rhetoric yielded a turn in Marx, Nietzsche, and Freud into the post-philosophical discourse of critical theory. In the aftermath of world war, critical theory took a biopolitical turn in Arendt, Fanon, and Foucault -- a turn still reverberating in work on socially legible bodies by writers like Haraway, Spivak, Butler, and Gilroy. And with the rise of the neoliberal precariat and climate catastrophe, critical theory is now turning again in STS (science and technology studies) and EJC (environmental justice critique) to articulate the problems and promises of an emerging planetarity. Theories of the fetish define the turn of the three threshold figures of critical theory -- Marx, Nietzsche, and Freud (commodity, sexuality, and ressentimentality) -- and fetishisms ramify thereafter in critical accounts from Benjamin (aura), Adorno (culture industry), Barthes (myth), Debord (spectacle), Klein (logo), and Harvey (tech) to Mulvey and Hall (the sexed and raced gaze).

Provisional Schedule of Meetings

Week One | January 23
Maps, Stories, Warnings by Way of Introduction

Week Two | January 30
Bernard le Bovier de Fontenelle, Digression on the Ancients and the Moderns
Immanuel Kant, Idea for a Universal History from a Cosmopolitan Point of View
W.E.B. Du Bois, Of Our Spiritual Strivings from The Souls of Black Folk
Oscar Wilde, The Soul of Man Under Socialism
Preface to The Picture of Dorian Gray
Phrases and Philosophies for the Instruction of the Young
Wilde on Trial

Week Three | February 6
Nietzsche, On Truth and the Lie in an Extramoral Sense
Selections from The Gay Science
Ecce Homo: Preface -- Why I Am So Wise -- Why I Am So Clever -- Why I Am a Destiny (or Fatality)

Week Four | February 13
Marx and Engels, Theses on Feuerbach
Marx on Idealism and Materialism
Marx on The Fetishism of Commodities and the Secret Thereof from Capital

Week Five | February 20
Excerpts from Freud's Case Study of Dr. Schreber
Sigmund Freud, Fetishism

Week Six | February 27
Walter Benjamin, Art in the Age of Mechanical Reproducibility
A Short History of Photography
Adorno and Horkheimer, The Culture Industry
Adorno, The Culture Industry Reconsidered

Week Seven | March 6
Roland Barthes, Mythologies
Raymond Williams, Culture from Keywords
Dick Hebdige, on Subculture and Style

Week Eight | March 13
Guy Debord, Society of the Spectacle
Naomi Klein, Taking On the Brand Bullies, Patriarchy Gets Funky from No Logo
Stuart Hall, The Question of Cultural Identity

Week Nine | March 16-20 | Spring Break

Week Ten | March 27
Franz Kafka, Give It Up!
Louis Althusser, Ideology and Ideological State Apparatuses
Hannah Arendt, The Gap Between Past and Future
William Burroughs on Coincidence and the Magical Universe

Week Eleven | April 3
Frantz Fanon, Selections from Black Skin, White Masks
Paul Gilroy, Race and the Right to be Human
Laura Mulvey, Visual Pleasure and Narrative Cinema
Kobena Mercer, On Mapplethorpe

Week Twelve | April 10
Michel Foucault, from Discipline and Punish, Introduction, Docile Bodies, Panopticism
Foucault, from History of Sexuality: We Other Victorians, Right of Death and Power Over Life
Frantz Fanon, Concerning Violence
Hannah Arendt, Reflections on Violence

Week Thirteen | April 13–17 | MFA Reviews

Week Fourteen | April 24
Judith Butler, Introduction and Chapter One from Undoing Gender
Donna Haraway, A Manifesto for Cyborgs
Valerie Solanas, The SCUM Manifesto
Carol Adams, Preface and On Beastliness and Solidarity

Week Fifteen | May 1
David Harvey, Fetishism of Technology
Hannah Arendt, The Conquest of Space
C.S. Lewis, The Abolition of Man (you need only read Chapter Three)
Slavoj Zizek, Bring Me My Philips Mental Jacket!

Week Sixteen | May 8
Bruno Latour, A Plea for Earthly Science
Gayatri Spivak, Theses on Planetarity

Course Objectives:

Contextualizing Contemporary Critical Theory: The inaugural Platonic repudiation of rhetoric and poetry, Vita Activa/Vita Contemplativa, Marx's last Thesis on Feuerbach, Kantian Critique, the Frankfurt School, Exegetical and Hermeneutic Traditions, Literary and Cultural Theory from the Restoration period through New Criticism, from Philosophy to Post-Philosophy: Marx, Nietzsche, Freud; the postwar biopolitical turn in Arendt, Fanon, and Foucault; and the emerging post-colonial, post-international, post-global planetarity of theory in an epoch of digital networked media formations and anthropogenic climate catastrophe.

Survey of Key Themes in Critical Theory: Agency, Alienation, Aura, Critique, Culture Industry, Discourse, Equity-in-Diversity, Fact/Value, Fetish, Figurality, Humanism/Post-Humanism, Ideology, Judgment, Neoliberalism, Post-Colonialism, Scientificity, Sociality, Spectacle, Textuality.

Survey of Key Critical Methodologies: Critique of Ideology, Marxism/Post-Marxism, Psychoanalysis, Foucauldian Discourse Analysis, Critical Race Theory, Gender Theory, Science and Technology Studies.

Connecting theoria and poiesis: thinking and acting, theory and practice, creative expressivity as aesthetic judgment and critical theory as poetic refiguration, etc.

Wednesday, January 21, 2015

Not Just A Creepy Cartoon, It's Artificial Imbecillence!

Horrifying Kickstarter Staff Pick. The mad monotonous piano scales in the background of the pitch perfectly capture the derangement produced by incessant insipid vapid algorithmic chatter. Extra points are due to the marketing jiujitsu of turning "hands-off" help into a feature. Of course it's "hands off": It's a goddamn lobotomized Disney Princess cartoon interrupting you with autocorrect suggestions wherever you go.

#futuressobrightIgottagiveshade

Microsoft's Hollow Lens hopes adding the Oculus Grift to Google Glassholery will prove two fails make a win.


Revolutionary, comfortable, and so attractive!


The Futurological Stain on the State of the Union

From the President's State of the Union address this evening:
Some of our bedrock sectors, like our auto industry, are booming. But there are also millions of Americans who work in jobs that didn't even exist ten or twenty years ago -- jobs at companies like Google, and eBay, and Tesla. So no one knows for certain which industries will generate the jobs of the future.
I'm no fan of America's ruinous and idiotic car culture -- which arose out of the postwar futurological cheerleading of "The Greatest Generation" -- but comparing the titans of Fordist manufacturing with SillyCon Valley's celebrity-CEOs and techbro VCs is patently ridiculous. It is notoriously the case that firms in the IT sector with market capitalization comparable to large retailers or manufacturing companies employ only a fraction as many people as these traditional sectors do.

About those tech giants name-checked as exemplars on whom the President means to pin our jobs future? Well, Google employs between 37,000 and 52,000 people; eBay employs about 32,000 people; and Tesla Motors employs about 6,000 people. That's far from the kind of stunning employment contribution these enterprises were made to symbolize in tonight's State of the Union.

According to the Bureau of Labor Statistics, the Construction and Manufacturing sectors account for over 12% of US jobs while the Information sector accounts for under 2%. And this is despite the recent decline in manufacturing, which has resulted from race-to-the-bottom trade policies rather than some irresistible digital destiny in any case, and hence could be reversed should our policies come to reflect fairness and sustainability priorities, as they should on Obama's own terms.

It seems a bit odd, I must say, the way the speech corralled Tesla with Google and eBay, since elsewhere Obama's speech (in the snippet quoted above, for example) takes pains to distinguish "new" IT from "old" manufacturing. I guess it makes a difference when the auto manufacturer is making marginal publicity-hogging boutique-green electro-Edsels. All that hype just has the zing of new now next! Indeed, what all these companies actually share most of all is the techno-transcendental coloration imbued by our own generation's futurological flim-flam operators, peddling digitality and AI and cartoon-tech like Musk's LEO amusement park rides and Hyperloop stunt.

Even Obama's much-anticipated and discussed proposal to make two years of community college much more widely available was freighted with futurological framing. While I am heartened by any commitment to a real public investment in our capacity for collective problem-solving, I was disheartened again to find this proposal unexpectedly framed in the speech as a way to "train workers to fill high-paying jobs like coding... and robotics." As if coders and roboticists can overcome jobs lost to downsizing and outsourcing and financialization -- downsizing, outsourcing, and financialization indispensably enabled and abetted by, that's right, coders and automation!

And although I strongly favor the President's call for public investment in a faster and more open internet, I must say that for one thing I am far from assured that the President's panoptic sorts comport with a sense of openness worthy of the name; and for another thing I am well aware that the reason Europe has an incomparably faster and cheaper and more reliable internet than Americans do right now has everything to do with regulations and nothing to do with "the digital innovators and entrepreneurs [who] keep reshaping our world" to whom Obama genuflected in his speech. I have a song in my heart for fact-gathering social workers and labor economists with clipboards doing the work good Democrats are supposed to do, but the upward-failing skim-and-scam operators of the "new economy" Obama praised over and over again in the big speech tonight -- so many of whom slurp up government cash while crowing about their libertechbrotarian cyborg-individualism and hostility to Big Government -- just make me want to ralph.

Like the Clinton and Gore embrace of the irrational exuberance of the fin de siecle dot.bomb, Obama's embrace of digi- nano- AI- nonsense reveals the profound susceptibility of the partisan Democratic left to assimilating reactionary politics through uncritical "technology" discourses that rationalize corporate-military budgetary priorities and conduce to mass consumer-complacency and circumventions of democratic deliberation by self-appointed technocratic and designer elites. It is enormously important that the Democratic Party has embraced macroeconomic literacy, climate science, Darwinian evolution, public healthcare, safer sex education, medical research, renewable infrastructure spending, fact-based harm-reduction policy-making, and so on against the outrageous anti-intellectualism and science-denialism of today's GOP. But these Democratic commitments must be informed and not simply fetishistic.

I am a champion of real public space programs for discovery and research toward the public good -- which is why I refuse to celebrate the displacement of this vision by the Vegas dreams of for-profit space hucksters foisting low-earth orbit planes and orbital love motels on us while promising to colonize the solar system and mine the asteroids in an imperial gold-rush get-rich-quick future re-run of manifest destiny. I am a champion of real public investment in renewable, resilient energy, communication, and transportation infrastructure and of real investment in medical research and access -- which is why I refuse to celebrate the displacement of this vision by greenwashing geo-engineers or hucksters of enhancement and longevity moonshine for superannuated boy-band Boomers.

Democrats have to take care not to fall for pseudo-science nor for reactionary policies with a "tech" patina: like MOOCifying education "reformers," like budget hawks who pretend miracle medicine justifies raising the retirement age, like suave Big Data miners and masseurs treated more and more like wizards in electoral and marketing campaigns (which are becoming less and less distinguishable) at the risk of substance, like drone cheerleaders who want to make illegal war and assassinations on the cheap while we sleep, like venture-capitalist "disruptors" peddling the usual right-wing de-regulation, looting of common goods, and valentines to makers-vs-takers wealth-concentration.

Look, I enjoyed the President's attitude and ad libs as much as the next guy. There were edifying passages on fairness and sustainability and diplomacy (most of them contradicted at other points in the speech, not to mention by reality). It wasn't a terrible speech, and it had the benefit of being pretty forgettable. As an opening move in the long campaign to put Hillary Clinton with an Elizabeth Warren inflection into the White House the speech wasn't half bad. But as somebody who takes progressive technoscience seriously, I must say the whole speech was stained by a futurology that has no future if we are to have any. Hell, by the end I felt it was a mercy we weren't subjected to a paragraph on 3D-printing delivering post-scarcity and the Internet of Things!

Tuesday, January 20, 2015

Google Glassholery Will Never Die!

"Startup CEO" David Levine has the sadz for the recent fall from grace of the awkward gawky panoptic Google Glass. He declares himself "perturbed and puzzled" by the glee with which glassholery's fail was greeted across the internet and "puzzled" again by the claim in industry rag CNET that the withdrawal of the product reflects the recognition by Google that few people seem to want the thing.

I'm puzzled that anybody is puzzled, but then I was never tempted to believe, as Levine did and apparently still does, that "Google Glass was literally the beginning of a revolution not just in the wearables sector but mobile as a whole. The concept was big, bold and brash and captured the imagination of the entire industry."

Of course, no crappy mobile device is "literally... a revolution." Gosh, how I love the crackerjack madness of that "literally."

The futurological derangement of reasonable assessment refuses to consider the actual costs, risks, and benefits of an artifact to the diversity of its stakeholders and the diversity of their wants in the diversity of their situations. The futurological at once re-frames technodevelopmental change from a site of stakeholder struggle into a series of stepping stones aspiring toward The Future, and re-directs technodevelopmental reflection from stakeholder deliberation to a consumer fandom providing escapism and promising transcendence.

Nobody ever wore the distracting, straining, uncomfortable, alienating Google Glass because it was useful but only because it enabled a kind of futuristic cosplay in denial about what it was doing while invested in a vision of The Future as drab as it is dystopian.

As a True Believer, Levine has learned from the failure of Glass nothing but that it will eventually triumph, naturellement. Zombie eyes and zombie lies forevs! The inevitable next iteration of the Revolution will, he assures us, be fashionable and respectful of privacy -- or, wink wink nudge nudge, much more "unobtrusive" about what it is and what it is doing.

Vive la revolution! Just die already.

Monday, January 19, 2015

If You Have A Brain, A Heart, Or A Conscience Fox News Is The Ultimate No Go Zone

Stop feeling vindicated, hopeful, or smug about recent Fox News apologies for their surreally idiotic and bigoted reports of "No Go Zones" in Europe. The apologies testify to their recognition that they have opened themselves up to powerful attacks and may divide Republican presidential hopefuls threading the bigoted Base needle in costly ways early on that could stick in the general election -- and by accepting their apology (or letting the matter drop now that they have made it, which amounts to the same thing) we collaborate in ensuring not only that the damage they have done remains in force but also that they don't have to pay for it. Their bigot meme is out there now, and everybody's racist uncle will keep the zombie No Go Zones alive and eating brains from here to eternity. The only way to kill the meme is to subvert it with another meme that redirects the politics elsewhere. "Fox News Is the Ultimate No Go Zone" is the meme-disruptor that occurred to me after one second's thought. Since "No Go Zones" seem to amount to ghettoized underserved communities, calling them "Now Go Help Zones" or something like that might be another approach to the rhetoric playing out here. Other proposals welcome.

Religious Beliefs Don't Pass Scientific Muster: But That Recognition Goes Both Ways

I agree with those who argue faithful convictions that are not scientifically warranted should not be taught in science classrooms or form the basis of public policy seeking accountable harm-reduction outcomes. I daresay a majority of the people who share my conviction on this score are actually people of some sort of faith or other, even if it also seems obviously true that the vast majority of people who disagree with me are religious fundamentalists.

It actually matters that while science education and public policy should be warranted by scientific criteria, it is also true that faithful beliefs that aren't about facilitating prediction or control but about finding one's way to personal legibility, sublimity, or hope, say, need not be warranted by scientific criteria, and that their failure to be so warranted is not grounds for refuting them once and for all.

Consistency neither recommends faith -- or any particular faith among the many competing faiths on offer -- nor provides a ground for rejecting faith out of hand. There are other grounds (taste, tradition, the vicissitudes of history or personal experience) that may do so for some (full disclosure: me included), but they seem to me mostly rather personal.

Certainly I disapprove faith communities that re-write politics in the image of imperial moralizing or science in the image of subcultural signaling -- but I disapprove science advocacy that would reduce aesthetic judgment, moral community, or political reconciliation to its terms for mostly the same reasons.

I'm an atheist -- that is to say I've been a-theist, without god(s), and cheerfully so, for more than thirty years by now -- but the force of my experiences of aesthetic sublimity and of my faith in democratic progress toward equity-in-diversity readily connects me with many who are religious. Again, I'm an atheist, but when atheist advocacy demands scientism or denigrates multiculture or provides a vehicle for racist, sexist or plutocratic reaction I honestly can't say that I feel the remotest connection to those who claim to champion an atheism I share.

2015 SOTU = Ready for Hillary 2016

At least that's what it is sounding like from the previews coming one day out.

Nothing stamps out a radical like putting them on a stamp.

True to form, America tried this first.

Saturday, January 17, 2015

Super Green!

Looks Like Musk Rat Love

Mehumans and soon-to-be-uplifted Great Apes and Cetacea, rejoice! Randian archetype and scam artist Elon Musk has managed to keep the non-story about keeping the world safe from non-existing Robot Gods alive for yet one more day by promising to devote ten million dollars to "run a global research program aimed at keeping AI beneficial to humanity." Geez, how much money does it take to rent a room for a libertechbrotarian circle jerk, anyway? I do hope these men of Science! and also Ethics! didn't let Elon get away with paying in digital muskcoin. "AI leaders" (what on earth could that possibly mean?) declared the prospect of getting millions of dollars "wonderful" but added that they "deserve much more funding than even this donation will provide." So, keep that collection plate nice and full if you hope to get uploaded as cyberangels in Holodeck Heaven one day, Robot Cultists! "I love technology, because it's what's made 2015 better than the stone age", said MIT professor and Future of Life Institute president Max Tegmark, I guess because maybe he got drunk when he heard about the donation? (That's really a quote, and from his own press release, it's right there if you click the link.) With thought leadership like that at the helm, who can doubt acceleration will keep accelerating, but, you know, ethically and stuff, into The Future of Robots we all want because we're not robots?

Thursday, January 15, 2015

Will ValleyWag's Dan Lyons Jump the Robo-Shark?

The first post arrived last Friday: "Stephen Hawking sees the danger of artificial intelligence. So does Elon Musk. Oxford professor Nick Bostrom, head of the Future of Humanity Institute, has written a whole book about it. Even the scientists at Google DeepMind, who are developing artificial intelligence, seem a little spooked about it... Since it's a new year, and since it's the weekend, why don't we ponder the possibility that sometime in the (relatively speaking) not-too-distant future, our miserable species will vanish."

"The danger." Is there one? "The Institute." Is it Very Serious? "A whole book." Is it worth reading? "The scientists." Scientists, are they? "Are developing." ARE they now?

A fine critique of the arguments and the cultists making the arguments appeared soon (click the link for all of it, I've merely excerpted it), written by a reader who also contributes excellent comments in the Moot here, "Esebian,"
I'm sorry, but you made this blog jump the fucking shark. Valleywag was always about exposing the fraud, the self-aggrandizement and the damages done by SillyCon Valley, but right now you sing along to their marketing department's tune. Because that's exactly what sooper-intelligence and "godlike AIs" really are, publicity stunts. It comes in to flavors: A) rehashed religious paradise fantasies about the New Computer God creating utopia with nanomachines fulfilling all material needs and building sooper-great robot bodies for us to "upload" into and become immortal or B) doom 'n' gloom stories about Skynet nuking the puny hoo-mans. Both of them are supremely masturbatory and completely divorced from reality... The integrity of this blog is at stake! You're playing right into the hands of techbro scam artists and crackpot cultists. Don't let this site be dragged into the transhumanist/Singularitarian mire.
This comment and a couple more exposed the reductive, hyperbolic pseudo-scientific nonsense on which this super-AI fearmongering is premised and seemed to receive fairly supportive responses from Lyons. Fine, then, thought I.

And then three days ago Lyons posted another bite at the super-AI poisoned apple, and although it snarked about some Hollywood types we would be foolish to take seriously on this issue (as opposed, apparently, to the Robot Cultists who are taken quite seriously on this issue instead), the substance of the case for the Very Serious worry about the "existential risk" of super-AI was presented at great length and with little substantial qualification.

Incredibly, a third post on the topic appeared today. The full text: "The battle: Elon Musk versus killer robot with AI brain. On one side, enormous intelligence combined with a completely ruthless, amoral worldview and limitless resources. On the other side, a robot. Who will win?" Of course, one cannot help but provide the action-flick trailer voiceover -- "In a world of killer robots, Iron Man Elon Musk girds his loins for battle!" -- but it intrigues me that behind the implied snark this is essentially nothing but a recapitulation of the straight robocultic line.

As "Esebian" noted, ValleyWag offers quite a lot of critique of tech hyperbole and vacuity as well as documenting the atrocities of celebrity-ceo sociopathy and tech-guru weirdness and vapidity, all of which are only too readily applicable to the topic at hand. This would not appear to be the spirit in which these pieces are being offered up, however, and the example of sister-site io9's regular capture by transhumanoid proselytizing (Dumb Dvorsky, I'm looking at you!) at the expense of the substantive sfnal literary/cultural critique they do pretty well otherwise gives me reasons to worry.

Techno-transcendentalism is the reductio ad absurdum of reactionary corporate-military digi-utopian fraud and plutocratic skim-and-scam -- it amplifies the status quo while pretending to be a radical critique. One hopes the critics and contrarians of ValleyWag retain the sense and standards to grasp the difference, else they will become a promotional rather than satirical response to SillyCon VC sub(cult)ure. If it helps, wags, do read this.

AI Isn't A Thing

People who flutter their hands over the "existential risk" of the theoretically impoverished, serially failed project of good old-fashioned artificial intelligence (GOFAI) or its techno-transcendental amplification into a post-biological super-intelligent Robot God (GOD-AI) think they are worried about a thing. They think they are experts who know stuff about a thing that they are calling "AI." They can get in quite a lather arguing over the technical properties and sociopolitical entailments of this thing with just about anybody who will let them.

But their "AI" does not exist. Their "AI" does not have properties. Their "AI" is not on the way.

Their "AI" is a bunch of fancies bounded by stipulations. Their "AI" stands in the loosest relation to the substance of real code and real networks and their real problems and real people doing real work on them here and now.

"AI" is a discourse, and it serves a primarily ideological function: It creates a frame -- populated with typical conceits, mobilizing customary narratives -- through which real problems and complex phenomena are being miscomprehended by technoscientific illiterates, acquiescent consumers, and wish-fulfillment fantasists. Ultimately, the assumptions and aspirations investing this frame have to do with the promotion and advertizing of commodities, software packages, media devices and the resumes of tech-talkers. At their extremity, these assumptions and aspirations mobilize and substantiate the True Belief of techno-transcendentalists given over to symptomatic fears of mortality, vulnerability, contingency, error, lack of control, but it is worth noting that the appeal to these irrational fears and passions merely amplify (in a kind of living reductio ad absurdum) the drives consumer advertizing and venture-capitalist self-promotion always cater to anyway.

Actually-existing biologically-incarnated consciousness, intelligence, and personhood look little like the feedback mechanisms of early cyberneticists and less still like the computational conceits of later neurocomputationalists. Bruce Sterling said nothing but the obvious when he pointed out that the brain is more like a gland than a computer. Living people don't look any more like the Bayesian calculators of alienated robocultic sociopaths than they look like the monomaniacal maximizers of political economy's no less sociopathic homo economicus.

So, of course, "The Forbin Project" and "War Games" and "The Terminator" and "The Lawnmower Man" and "The Matrix" are movies -- everybody knows that! Of course, our computers are not going to reach critical mass and "wake up" one day, any more than our complex and dynamic biosphere will do. Moore's Law is not spontaneously going to spit out a Robot God any more than an accumulating pile of abacuses would -- not least due to Jeron Lanier's corollary to Moore's Law: "As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources."

Again, everybody knows all that. But can everybody be expected to talk or act like people who know these things? Sometimes, the exposure of the motives and hyperbole and deception of AI ideology will lead its advocates and enthusiasts to concessions but not to the relinquishment of the ideology itself. Even if we do not need to worry about making Hal our pal, even if AI will not assume the guise of a history-shattering super-parental Robot God... what if, they wonder, somebody codes some mindless mechanism that is satanic by accident or in the aggregate, like a vast robo-runaway bulldozer scraping the earth of its biological infestation, a software glitch that releases an ubergoo waveform transforming the solar system into computronium for crunching out pi for all eternity?

The arrant silliness of such concerns is exposed the moment one grasps that security breaches, brittle code, unfriendly interfaces, mindless algorithms resulting in catastrophic (and probably criminal) public decisions are all happening already, right now. There are people working on these problems, right now. The pet figures and formulations, the personifications, moralisms, reductions and triumphalisms of AI discourse introduce nothing illuminating or new into these efforts. If anything, AI discourse encourages its adherents to assess these developments not in terms of their actual costs, risks, and benefits to the diversity of their actual stakeholders, but to misread them as stepping stones along the road to The Future AI, signs and portents in which is glimpsed the imminence of The Future AI, thus distracting attention from the present reality of problems toward the imagined future into which symptomatic fears and fancies are projected.

So, too, sometimes the exposure of the irrational True Belief of adherents of AI-ideology and the crass self-promotion and parochial profit-taking of its prevalent application in consumer advertizing and pop-tech journalism will lead its advocates and enthusiasts to different concessions. Sure, it turns out that Peter Thiel and Elon Musk are hucksters who pulled insanely lucrative skim-and-scam operations over on technoscientific illiterates and now want to consolidate and justify their positions by promoting themselves as epochal protagonists of history. And, sure, Ray Kurzweil and Eliezer Yudkowsky are guru-wannabes spouting a lot of pseudo-scientific pseudo-philosophical pseudo-theological nonsense while looking for the next flock to fleece. But what if there are real scientists and entrepreneurs and experts somewhere doing real coding and risking real dangers in their corporate-military labs, quietly lost in their equations, unaware that they are coding the lightning that will convulse the internet corpse into avid Frankensteinian life?

Of course, the very robocultic nonsense disdained in such recognitions has found its way to the respectability and moneybags of Google, DARPA, Oxford, Stanford, MIT. And so, to imagine some deeper institutional strata where the really serious techno-transcendental engines are stoked actually takes us into conspiratorial territory rather quickly. Indeed, this fancy is a mirror image of the very pining one hears from frustrated Robot Cultists who know all too well in their heart of hearts that nobody is out there materializing their daydreams/nightmares for them, and so one hears time and time again the siren call for separatist enclaves, from taking over tropical islands or building offshore pirate utopias on oil rigs to huddling bubbled under the sea or taking a buckytube space elevator to their private L5 torus or high-tailing it out to their nanobotically furnished treasure cave -slash- mad scientist lab in the asteroid belt to do some serious cosmological engineering.

Again, it is utterly wrong-headed to think there are serious technical types working on "AI" -- because there is nothing for them to be working on. Again, "AI" is just a metaphorization and narrative device that enables some folks to organize all sorts of complex technical and political developments into something that feels like sense but is much more about wishes than working. The people solving real problems with code and technique and policy aren't doing "AI" and to read what they are doing through AI discourse is fatally to misread them. It is only a prior investment in the assumptions and aspirations, figures and frames of AI discourse that would lead anybody to think one should relinquish the scrum of real-world problem solving and ascend instead to some abstract ideality the better to formulate a "roadmap" with which to retroactively imbue technoscientific vicissitudes with Manifest Destiny or to treat as "the real problem" the non-problem of crafting humanist Asimovian injunctions to constrain imaginary robots from imaginary conflicts they cause in speculative fictions.

You don't have to worry about things nobody is working on. You shouldn't pin your hopes or your fears on pseudo-philosophical fancies or pseudo-scientific follies. You don't have to ban things that don't and won't exist anyway, at any rate not in the forms techno-transcendentalists are invested in. There are real things to worry about, among them real problems of security, resilience, user-friendliness, interoperability, surveillance. "AI" talk won't help you there. That should tell you right away it works instead to help you lose your way.

Wednesday, January 14, 2015

Futurology's Shortsighted Foresight on AI

Also posted at the World Future Society.
The idea of a ban on "existentially-risky" artificial intelligence -- a term which is concerned with quite a lot of stuff that isn't or wouldn't be intelligent -- is very much in the news right now (or what passes for news in the illiterate advertorial pop-tech press) due to a recent Open Letter from the Future of Life Institute -- an "institute" which is concerned with quite a lot of stuff that isn't or wouldn't be alive. This Letter happens to be getting a lot of signatures from celebrities and celebrity CEOs, but also from some computer scientists who are no more qualified than you or me or Alan Alda (who has signed the Letter) to wade into the philosophy of consciousness or personhood at issue.

Actually, many of the signatories to the letter are outright boosters, one might even say dead-enders, for the serially failed project of good old fashioned artificial intelligence (GOFAI), and while much of the public discussion of AI/superAI in these circles is framed in terms of bans, the Letter itself indulges in loose talk of "responsible oversight" of AI. Mostly, this seems to me to involve giving more money and more attention to the people who still take GOFAI seriously. The key folks behind the Letter are techno-transcendentalists explicitly associated with transhumanist and singularitarian and techno-immortalist movements and sub(cult)ures, and it is interesting how rarely even those ridiculing the Letter are pointing out this fact (you will find Nick Bostrom, George Dvorsky, Ben Goertzel, Elon Musk, Jaan Tallinn, Eliezer Yudkowsky all over my Superlative Summary). Would commenters be so reticent to notice were all these figures to happen to be Raelians or Scientologists?

It is a bit demoralizing to find that the public debate on this topic seems to be settling into one between those who say something on the order of "well, some of these extreme arguments seem a bit crazy, but this problem needs to be taken seriously" versus those who ridicule the debate by joking "I for one welcome our robot overlords" and then declaring that when the Robot God arrives we don't stand a chance. In other words, every position concedes the validity of the topic and its essential terms while at once pretending to step back from it. These gestures essentially concede the field to the futurologists and invigorate the legibility of their AI discourse and hence the profitability of the marketing agenda of the tech companies that deploy it, which is the only victory they want or need in any case.

Now, I for one think that there is no need to ban AI/super-AI because our present ignorance and ineptitude form barriers to its construction incomparably more effective than any ban could do. We lack even the most basic understanding of so many of the phenomena associated with actually-existing biologically-incarnated consciousness, affect, and identity, while our glib attributions of intelligence and personhood to inert objects and energetic mechanisms all attest to the radical poverty of our grasp however marvelous our reach. We don't need to get the problem of the Robot God off the table, because there is no Robot God at the table nor will there be any time soon.

I daresay all this need not be the case forever, after all. Perhaps human civilization will one day confront the danger of AI/super-AI, but that day is not soon -- and those who say otherwise seem to me mostly to be laymen in the field of computer science making claims about the state of the art for which they are unqualified, or computer scientists making philosophical arguments in ways that reveal little philosophical rigor or historical awareness.

There is no reason to think that a sensible assessment of the state of the art in computer programming here and now would undermine reassessment in the future should our models and techniques improve. Indeed, there is every reason to think, to the contrary, that premature concern from our limited perspective will introduce false formulations and figures the legacy of which might interfere with sensible deliberation later when it is actually relevant.

To repeat: I think it is extremely premature to deliberate here and now over banning or regulating AI/superAI that neither exists nor is likely to exist soon; and, if anything, to do so is more likely to undermine the terms of such deliberation should it eventually become necessary. My critique does not end there, however, since this utterly unnecessary and premature and eventually possibly damaging AI/superAI deliberation here and now is happening nonetheless, and seems to be attracting greater attention, and so does have real effects in the world even without any justification on its own terms or real objects of concern.

This takes me to a critical proposal at a different level: namely, that the time and money and the conferral of authority on "experts" devoted to the "existential risk" of unregulated/unfriendly AI/superAI function to divert resources and attention from actual problems and actually relevant experts, and indeed are sometimes mobilized precisely to trivialize urgently real problems (as the increasingly influential Nick Bostrom's worries about AI are directly connected to a rejection of the scope of anthropogenic climate change as a public problem, for example).

Returning to the Letter's recommendation of "responsible oversight," consider this paradoxical result: nobody can deny that there are incredible problems and enormous risks associated with the insecurity of networked computers and with the user-unfriendliness of programs and with the dangerous political consequences of substituting algorithms for judgments about human lives. Such questions are usually not the focus of the futurological discourse of AI/superAI, and usually serve at best as dispensable pretexts or springboards for heated "technical" discussion debating the Robot God odds of robocalypse or roborapture. Indeed, it is one of the more flabbergasting consequences of AI/super-AI discourse that it not only distracts from actually real problems of computation, but becomes a distortive lens of false and facile personifying figures and moralizing frames that confuse the relevant terms and stand in the way of deliberation over the problems at hand.

Incredibly, if AI/superAI eventually does become a matter of real concern in anything remotely like the terms that preoccupy futurologists, I would say we will be better prepared to cope with it through ongoing and gathering practical experience with actual coding problems as they actually exist than by ignoring reality and instead imagining idealized future machines from our present, parochial, symptomatic perspective.

The primary impact of AI/superAI discourse as it ramifies in the public imaginary has been instead to denigrate human intelligence as it actually exists: calling cars "smart" to sell stupid unsustainable car-culture to consumers, calling credit cards "smart" to seduce people into ubiquitous surveillance the better to harass them with targeted ads, and rationalizing crappy "AI" software like autocorrect, crappy computer-mediated "smart" analyses like word-clouds, and crappy "decision" algorithms that determine who gets to start a business or who gets to be extrajudicially murdered by a drone as a potential "terrorist." As always, talk of artificial intelligence yields artificial imbecillence above all.

AI discourse in its prevalent forms solves no real problems, is not equipped to deal with eventual problems, and functions in the present to cause catastrophic problems. It seems to be of use primarily as a way to promote crappy computation for short-term plutocratic profit.

It is no surprise that this shortsightedness is what futurologists and tech-talkers would peddle as "foresight."

The Artificial Man Returns!

I find it impossible to believe that Mitt Romney is really making a third bid for the presidency, and assume this is some sort of gross rich white dude pissing contest with Bush over who is the still-is and who is the has-been in the Greedy Olds Party or whatever, but I honestly found it pretty impossible to believe he was making the second bid he was actually making when he was doing that either, so who knows? Certainly I never thought there would be any excuse at all to remind anybody of my fable, The Artificial Man the Killer Clowns Made and the Mouse Child Who Said What She Saw, but here we are and here it is.

Monday, January 12, 2015

Looking In All the Wrong Places

Curious the number of champions of artificial intelligence and artificial life who are unintelligent and don't have a life.

More Futurological Brickbats here.

Foodie Futurism

I'm a lacto-ovo vegetarian now, but obviously in The Future will be a digi-nano vegetarian.

More Futurological Brickbats here.

Nourishing Nothingness: Futurists Are Getting Virtually Serious About Food Politics

I'm a lacto-ovo vegetarian now, but obviously in The Future will be a digi-nano vegetarian...
Salon has alerted me to the existence of a new SillyCon Valley startup, Project Nourished, which hopes to use synesthetic cues from a virtual reality helmet, a vibrating spork, and whiffs from a perfume atomizer to fool America's obese malnourished gluttons into thinking they are feasting on two-pound steaks and baskets of onion rings and death by chocolate sundaes when in fact they are eating gelatinous cubes of zero-calorie vitamin-fortified goo.

According to the breathless website, this proposal will "solve" the following problems: "anorexia, bulimia, cancer, diabetes, heart disease, obesity, allergies and co2 omissions."

The real problem solved by the project is that it definitively answers a question I have long pondered: Is futurology so utterly idiotic and smarmy that it is actually impossible to distinguish its most earnest expressions from even the most ridiculous parodies of them?

I mean, to literally name your project "nourish" while actually avowing you seek to peddle a product that nourishes no one is pretty breathtaking. It's like the scam of peddling sugary cereals as part of "this complete nutritious breakfast," when all the nourishment derives from the juice and eggs and toast accompanying the bowl in the glossy photo but almost never in the event of an actual breakfast involving the cereal in question. Except now, even the cereal isn't really there, but a bowl of packing cardboard over which is superimposed an image of Froot Loops with a spritz of grapefruit air-freshener shot in your nostril every time you take a bite.

Why ponder structural factors like the stress of neoliberal precarity or the siting of toxic industries near residences or the lack of grocery stores selling whole foods within walking distance or the punitive mass-mediated racist/sexist body norms that yield unhealthy practices, eating disorders, the proliferation of allergies and respiratory diseases and so on? Why concern yourself with public investment in medical research, healthcare access, vegetarian awareness, zoning for walkability, sustainable energy and transportation infrastructure and so on?

The Very Serious futurologists have a much better technofix for all that -- it's kinda sorta like the food pills futurologists have been promising since Gernsback, but now you would eat large empty candy colored polyhedra (you know, like the multisided dice nerds used to use to play D&D in the early 80s) while sticking your head in a virtual reality helmet (you know, like the virching rigs techbros have been masturbating over since the late 80s). Also, too, the stuff would be 3D-printed, because if you are a futurologist you've gotta get 3D-printing in there somewhere. As I said, Very Serious!

Returning to the website, we are told, "the project was inspired by the film Hook, where Peter Pan learns to use his imagination to see food on a table that seemed completely empty at first." Setting aside the aptness of drawing inspiration from a crappy movie rather than the actual book on which it is based -- only Luddites think books have a future, shuh! -- I propose that Project Nourished has a different filmic inspiration:

Sunday, January 11, 2015

Still Not Charlie

If free speech is what you are defending, then voicing public criticism of Hebdo is the same protest and celebration as declaring you are Charlie.

Saturday, January 10, 2015

Uploading As Reactionary Anti-Body Politics

A reader in the Moot describes some typical transhumanoid versions of "doing radical social criticism... saying something along the lines of, say, gender won't matter anymore when we upload our minds to the noosphere." For transhumanoid radical race critique fill in the blank (and try not to think too much about the history of eugenics, or how transhumanists seem to be a whole lot of white guys), for transhumanoid radical class critique here comes NanoSanta Clause.

Of course, not only is this not "doing radical social criticism" but it seems to me pretty explicitly, straightforwardly reactionary, even when accompanied by citations of actual feminist, queer, or anti-racist criticism. Complacent consumers who want to enjoy a little liberal guilt to spice their entertainments will always rationalize the violence and inequity of the present by declaring the debased now better than before and on the road to better still, and then grabbing a beer from the fridge, or clicking the buy button, or getting out on the dancefloor.

Plutocrats always naturalize their hierarchies as meritocracies. In much the same way, the whole robocultic uploading schtick is obviously a denigration of the materiality of the body, and it is always of course the white body, male body, straight body, cis body, healthy body, capacious body that can best disavow its materiality, because its materiality isn't in question or under threat, right?

It can be a mark more of privilege than perceptiveness to call into question that which won't ever be in question for you anyway. The bodily is always constituted as such through technique (from language to body language to posture to wearability), and the social legibility of every body is of course performatively substantiated. To grasp that point is to trouble or question the prediscursivity of the body or to recognize that prediscursivity is always a discursive effect. But this recognition is at best a point of departure and never the end-point for the interrogation of prevailing normative bodies and their abjection of bodily lifeways otherwise.

The denial or disavowal of differences that make a difference is much more likely effectively to endorse than efface them. Imaginary digi-utopian and medi-utopian circumventions of raced, gendered, abled bodily differences function in the present to deny or disavow rather than critically or imaginatively interrogate their terms. These omissions are all the more egregious when we actually turn our minds even cursorily to the perniciously raced and sexed histories of the medical and the digital as actually-existing practical, normative, professional sites.

Setting aside questions of the utter implausibility and incoherence of the techno-transcendental wish-fulfillment fantasies playing out in all this, why even pretend that recourse to digital dematerialization or to medical enhancement would circumvent rather than express the fraught, inequitable legibility and livability of wanted lifeway diversity? It will surely be the more urgent task to attend closely to the ways in which these very differences -- race, sex, ability -- shape the distribution of costs, risks, benefits, access, and information when it comes to actually-available prosthetic possibilities.

I must say it has always cracked me up that, since all information is instantiated on a material carrier, even on their own terms the spiritualization of digi-info souls is hard to square with the reductionist scientism these folks tend to congratulate themselves over -- not that it would be anything to be proud of even if they managed to be more consistently dumb in that particular way.

What can you really expect from techno-transcendentalists apparently so desperate not to grow old or die that they will pretend a scan of them would be them when no picture ever has been and that computer networks could reliably host their "info-souls" forever when most people long outlive their crufty, unreliable computer networks in reality, and all just so they can day dream they will be immortal cyberangels in Holodeck Heaven? Science!

Rebranding Crime As Terror

Treating certain acts of public violence as terrorism rather than criminality always seems to derange discussions both of what happened and what should be done. 

What does it mean when people are more afraid of, or at any rate more exercised by, comparatively rare incidences of terrorist violence than they are by comparatively more commonplace incidences of criminal violence? What does it mean when responses to terror undermine definitive civil liberties and utterly scramble budgetary priorities on the spur of a moment of public panic while responses to generations of inequitable policing and punishment move at a snail's pace despite long-plummeting crime rates and longstanding community protests?

I realize, of course, that terrorism seeks to provoke political responses in a way that criminality mostly does not, but it matters that distinguishing terrorism from criminality on the basis of this recognition tends to facilitate precisely those sorts of responses that the terrorists are seeking.

Branding violence as terror is itself terrorizing; terrorism is substantiated as such through the collaboration of the majority in the terms of a marginal minority: it amplifies a marginal threat of violence into an existential threat to civilization, it amplifies a brainwashed tool into a protagonist of history.

The Political Problem With Transhumanisms

Upgraded and Expanded from a response of mine to some comments in the Moot: Well, I think probably the key conceptual problem with transhumanisms is that they have an utterly uninterrogated idea of "technology" that pervades their discourses and sub(cult)ures. They attend very little to the politics of naturalization/ de-naturalization, of habituation/ de-familiarization that invest some techniques/artifacts (but not others, indeed probably not most others) with the force of the "technological." Quite a lot of the status quo gets smuggled in through these evasions and disavowals, de-politicizing what could be done or made otherwise, and hence rationalizing incumbency. Whatever the avowed politics of a transhumanist, their depoliticization of so much of the field of the cultural-qua-prosthetic lends itself to a host of conservative/reactionary naturalizations in my view.

This is all the more difficult for the transhumanists to engage in any thoughtful way, since they are so invested in the self-image of being on the bleeding edge, embracing novelty, disruption, anti-nature, and so on. I daresay this might have been excusable in the irrationally exuberant early days of the home computer and the explosive appearance of the Web (I saw through it at the time, though, so it can't have been that hard, frankly), but what could be more plain these days at least than the realization of how much "novelty" is merely profitably repackaged out of the stale, how much "disruption" is just an apologia for all too familiar plutocratic politics dismantling public goods?

Transhumanists turn out to fall for the oldest Madison Avenue trick in the book, mistaking consumer fandoms for avant-gardes. And then they fall for the same sort of phony radicalism as so many New Atheists do: mirroring rather than rejecting religious fundamentalism by recasting politics as moralizing around questions of theology; distorting the science they claim to champion by misapplying its norms and forms to moral, political, aesthetic, cultural domains beyond its proper precinct. (The false radicalism of scientism -- not science, scientism -- prevails more generally in technocratic policy-making practices in corporate-military think-tanks and in elite design discourses, many of which fancy themselves or at any rate peddle themselves as progressive; transhumanist formulations lean on these tendencies in their bids for legitimacy, while these already prevailing practices and discourses are in turn vulnerable to reframing in transhumanist terms; there are dangerous opportunities for reactionary politics going in both directions here.)

Transhumanists indulge what seems to me an utterly fetishistic discourse of technology -- in both Marxist and Freudian senses -- out of which a host of infantile conceits arrive in tow: Failing to grasp the technical/performative articulation of every socially legible body, cis as much as trans, "optimal" as much as "disabled," they fetishistically identify with cyborg bodies appealing to wish-fulfillment fantasies they seem to have consumed more or less wholesale from advertising and Hollywood blockbusters. Failing to grasp the collective/interdependent conditions out of which agency emerges, they grasp at prosthetic fetishes to prosthetically barnacle or genetically enhance the delusive sociopathic liberal "rugged/possessive individual" in a cyborg shell, pretty much like any tragic ammosexual or mid-life crisis case does with his big gun or his sad sportscar.

I have found technoprogressives to be untrustworthy progressives (I say this as the one who popularized that very label), making common cause with reactionaries at the drop of a hat, too willing to rationalize inequity and uncritical positions through appeals to eventual or naturalized progress -- progress is always progress toward an end, and its politics are defined by the politics of that end, and the substance of progress is not the logical or teleological unfolding of entailments but an interminable social struggle among a changing diversity of stakeholders -- whatever they call themselves, techno-fixated techno-determinisms are no more progressive than any other variation of Manifest Destiny offered up to congratulate and reassure incumbent elites.

Time and time again in my decades-long sparring with futurologists both extreme and mainstream I have confronted in my interlocutors curious attitudes of consumer complacency and uncritical techno-fixation, as well as more disturbing confessions of fear and loathing: fear of death and hostility to the mortal, aging, vulnerable body, fear of error or humiliation and hostility to the contingency, errancy, boundedness of the biological brains and material histories in which intelligence is incarnated. To say this -- which is to say the obvious, I fear -- usually provokes howls of denial and disavowal, charges of ad hominem and hate speech, and so I will conclude on a different note: Again, I don't think any of these transhumanist susceptibilities to reaction are accidental or incidental, but arise out of the under-interrogated naturalized technological assumptions and techno-transcendental aspirations on which all superlative futurologies/ists so far have definitively depended.

Friday, January 09, 2015

Republicans Prepare for 2016 Presidential Clown Bus

It is looking more and more like the run-up for the 2015-2016 GOP Presidential nomination will be more a double-decker killer clown bus than the killer clown car we witnessed in 2011-2012, as this time around both the megaphone bigot plutocratic Base tier (Huckabee, Cruz, Paul, and so on) and the stealth bigot plutocratic Establishment tier (Christie, Bush, Kasich, and so on) are looking quite competitive. Get ready for a multibillion dollar freakshow ending in a circular firing squad and followed, quite beyond decency or sense, by tall tales of a horse race.

Thursday, January 08, 2015

Robot Gods Are Nowhere So Of Course They Must Be Everywhere

Advocates of Good Old Fashioned Artificial Intelligence (GOFAI) have been predicting that the arrival of intelligent computers is right around the corner more or less every year since the formation of computer science and information science as disciplines, from World War II to Deep Blue to Singularity U. These predictions have always been wrong, though their ritual reiteration remains as strong as ever.

The serial failure of intelligent computers to make their long awaited appearance on the scene has led many computer scientists and coders to focus their efforts instead on practical questions of computer security, reliability, user-friendliness, and so on. But there remain many GOFAI dead-enders who keep the faith and still imagine the real significance that attaches to the solution of problems with/in computation is that each advance is a stepping stone along the royal road to AI, a kind of burning bush offering up premonitory retroactive encouragement from The Future AI to its present-day acolytes.

In the clarifying extremity of superlative futurology we find techno-transcendentalists who are not only stubborn adherents of GOFAI in the face of its relentless failure, but who double down on their faith and amplify the customary insistence on the inevitable imminence of AI (all appearances to the contrary notwithstanding) and now declare no less inevitable the arrival of SUPER-intelligent artificial intelligence, insisting on the imminence of a history-shattering, possibly apocalyptic, probably paradisiacal, hopefully parental Robot God.

Rather than pay attention to (let alone learn the lessons of) the pesky failure and probable bankruptcy of the driving assumptions and aspirations of the GOFAI research program-cum-ideology, these techno-transcendentalists want us to treat with utmost seriousness the "existential threat" of the amplification of AI into a superintelligent AI in the wrong hands or with the wrong attitudes. I must say that I for one do not agree with Very Serious Robot Cultists at Oxford University like Nick Bostrom or at Google like Ray Kurzweil or celebrity tech CEOs like Elon Musk that the dumb belief in GOFAI becomes a smart belief rather than an even dumber one when it is amplified into belief in a GOD-AI, or that the useless interest in GOFAI becomes urgently useful rather than even more useless when it is amplified into worry about the existential threat of GOD-AI because it would be so terrible if it did come true. It would be terrible if Godzilla or Voldemort were real, but that is no reason to treat them as real or to treat as Very Serious those who want to talk about what existential threats they would pose if they were real when they are not (especially when there are real things to worry about).

The latest variation of the GOFAI via GOD-AI gambit draws on another theme beloved by superlative futurologists, the so-called Fermi Paradox -- the fact that there are so very many stars in the sky and yet no signs that we can see so far of intelligent life out there. Years ago, I proposed
The answer to the Fermi Paradox may simply be that we aren't invited to the party because so many humans are boring assholes. As evidence, consider that so many humans appear to be so flabbergastingly immodest and immature as to think it a "paradoxical" result to discover the Universe is not an infinitely faceted mirror reflecting back at us on its every face our own incarnations and exhibitions of intelligence.
I for one don't find it particularly paradoxical to suppose life is comparatively rare in the universe, especially intelligent life, and more especially still the kind of intelligent life that would leave traces that human beings here and now would discern as such, given how little we understand about the phenomena of our own lives and intelligence and given the astronomical distances involved. As the Futurological Brickbat quoted above implies, I actually think the use of the word "paradox" here probably indicates human idiocy and egotism more than anything else.

A recent article in Vice's Motherboard collects a handful of proponents of a "new view" on this question that proposes instead that the "dominant intelligence in the cosmos is probably artificial." The use of the word "probable" there may make you think that there is some kind of empirical inquiry afoot here, especially since all sorts of sciency paraphernalia surrounds the assertion, and its proponents are denominated "astronomers, including Seth Shostak, director of NASA’s Search for Extraterrestrial Intelligence, or SETI, program, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick." NASA and the Library of Congress are institutions that have some real heft, but let's just say that typing the word "transhumanist" into a search for any of those names may leave you wondering a bit about the robocultic company they keep.

But what I want to insist you notice is that the use of the term "probability" in these arguments is a logical and not an empirical one at all: What it depends on is the acceptance in advance of the truth of the premise of GOFAI via GOD-AI, which is in fact far from obvious and which no one would sensibly take for granted. Indeed, I propose that like many arguments offered up by Robot Cultists in more mainstream pop-tech journalism, the real point of the piece is to propagandize for the Robot Cult by indulging in what appear to be harmless blue-sky speculations on science-fictional conceits but which entertain as true, and so functionally bolster, what are actually irrational and usually pernicious articles of futurological faith.

The philosopher Susan Schneider (search "Susan Schneider transhumanist," go ahead, try it) is paraphrased in the article saying "when it comes to alien intelligence... by the time any society learns to transmit radio signals, they’re probably a hop-skip away from upgrading their own biology." This formulation buries the lede in my view, and quite deliberately so. That is to say, what is really interesting here -- one might actually say it is flabbergasting -- is the revelation of a string of techno-transcendental assumptions: [one] that technodevelopmental vicissitudes are not contingently sociopolitical but logically or teleologically determined; [two] that biology could be fundamentally transformed while remaining legible to the transformed (that's the work done by the reassuring phrase "their own"); [three] that jettisoning biological bodies for robot bodies and "uploading" our biological brains into "cyberspace" is not only possible but desirable (make no mistake about it, that is what she is talking about when she talks about "upgrading biology" -- by the way, the reason I scare-quote words like "upload" and "cyberspace" is that those are metaphors not engineering specs, and unpacking those metaphors exposes enough underlying confusion and fact-fudging that you may want to think twice about trusting your "biological upgrade" to folks who talk this way, even if they chirp colloquially at you that your immortal cyberangel soul-upload into Holodeck Heaven is just a "hop-skip away" from easy peasy radio technology); and [four] that terms like "upgrade," freighted as they are with a host of specific connotations derived from the deceptive hyperbolic parasitic culture of venture-capitalism and tech-talk, are the best way to characterize fraught fundamental changes in human lives to be brought about primarily by corporate-military incumbent-elites seeking parochial profits. Maybe you want to read that last bit again, eh?

Seth Shostak quotes from the same robocultic catechism a paragraph later: “As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI... At that point, soft, squishy brains become an outdated model.” Notice the same technological determinism. Notice that the invention of AI is then declared to be probable within a century -- and no actual reasons are offered up in support of this declaration, which is made in defiance of all evidence to the contrary. And then notice that suddenly we find ourselves once again in the moral universe of techno-transcendence: where Schneider assumed robot bodies and cyberspatial uploads would be "upgrades" (hop-skipping over the irksome question whether such notions are even coherent or possible on her terms, whether a picture of you could be you, whether fetishized prosthetization would be enhancing to all possible ends or disabling to some we might come to want or immortalizing when no prostheses are eternal, etc etc etc etc), Shostak leaps to the ugly obverse face of the robocultic coin: "soft, squishy brains" are "outdated model[s]." Do you think of your incarnated self as a "model" on the showroom floor, let alone an outdated one? I do not. And refusing such characterizations is indispensable to resisting being treated as one. Maybe you want to read that last bit again, eh?

“I believe the brain is inherently computational -- we already have computational theories that describe aspects of consciousness, including working memory and attention,” Schneider is quoted as saying in the article. “Given a computational brain, I don’t see any good argument that silicon, instead of carbon, can’t be an excellent medium for experience.” Now, I am quite happy to concede that phenomena enough like intelligence and consciousness for us to call them that might in principle take different forms from the ones exhibited by conscious and intelligent people (human animals and, I would argue, also some nonhuman animals) and be materialized differently than in the biological brains and bodies and historical struggles that presently incarnate them.

But conceding that logical possibility does not support in the least the assertion that non-biological intelligences are inevitable, that present human theories of intelligence tell us enough to guide us in assessing these possibilities, that human beings are on the road to coding such artificial intelligence, or that current work in computer theory or coding practice shows any sign at all of delivering anything remotely like artificial intelligence any time soon. Certainly there is no good reason to pretend the arrival of artificial intelligence (let alone godlike superintelligence) is so imminent that we should prioritize worrying about it over deliberation about actually real, actually urgent, actually ongoing problems like climate change, wealth concentration, exploited majorities, neglected diseases, abuse of women, arms proliferation, human trafficking, military and police violence.

What if the prior investment in false and facile "computational" metaphors of intelligence and consciousness is evidence of the poverty of the models employed by adherents of GOFAI and is among the problems yielding its serial failure? What if such "computational" frames are symptoms of a sociopathic hostility to actual animal intelligence or simply reveal ideological commitments to the predatory ideology of Silicon Valley's unsustainable skim-and-scam venture capitalism?

Although the proposal of "computational" consciousness is peddled here as a form of modesty, as a true taking-on of the alien otherness of alien intelligence in principle, what if these models of alien consciousness reflect most of all the alienation of their adherents -- the sociopathy of their view of their own superior computational intellects and their self-loathing of the frailties in that intellect's "atavistic" susceptibility to contingency, error, and failure -- rather than any embrace of the radical possibilities of difference?

It is no great surprise that the same desperate dead-enders who thought they could make the GOFAI lemon into GOD-AI lemonade would then go on to find evidence of the ubiquity of that GOD-AI in the complete lack of evidence of GOD-AI anywhere at all. What matters about the proposal of this "new view" on the Fermi Paradox is that it requires us to entertain as possible, so long as we are indulging the speculation at hand, the very notion of GOFAI that we otherwise have absolutely no reason to treat seriously at all.

Exposing the rhetorical shenanigans of faith-based futurologists is a service I am only too happy to render, of course, but I do want to point out that even if there are no good reasons to treat the superlative preoccupations of Robot Cultists seriously on their own terms (no, we don't have to worry about a mean Robot God eating the earth; no, we don't have to worry about clone armies or designer baby armies or human-animal hybrid armies taking over the earth; no, we don't have any reason to expect geo-engineers from Exxon-Mobil to profitably solve climate change for us or gengineers to profitably solve death and disease for us or nanogineers to profitably solve poverty for us) there may be very good reasons to take seriously the fact that futurological frames and figures are taken seriously indeed.

Quite apart from the fact that time spent on futurologists is time wasted in distractions from real problems, the greater danger may be that futurological formulations derange the terms of our deliberation on some of the real problems. Although the genetic and prosthetic interventions techno-triumphalists incessantly crow about have not enhanced or extended human lifespans in anything remotely like radical ways, the view that this enhancement and extension MUST be happening if it is being crowed about so incessantly has real world consequences, making consumers credulous about late-nite snake-oil salesmen in labcoats, making hospital administrators waste inordinate amounts on costly gizmos and ghastly violations in end-of-life care, rationalizing extensions of the retirement age for working majorities broken down by exploitation and neglect. Although the geo-engineering interventions techno-triumphalists incessantly crow about cannot be coherently characterized and seem to depend on the very funding and regulatory apparatuses the necessary failure of which is usually their justification, the view that such geo-engineering MUST be our "plan B" or our "last chance" provides extractive-industrial eco-criminals with fresh new justifications to deny any efforts at real world education, organization, and legislation to address environmental catastrophe. The very same techno-deterministic accounts of history that techno-triumphalists depend on for their faith-based initiatives provided the rationales that justified, in nations emerging from colonial occupation, indebtedness to their former occupiers -- in the name of vast costly techno-utopian boondoggles like superdams and superhighways and skyscraper skylines -- and then the imposition of austerity regimes that returned those nations to conditions of servitude.

Although I regard as nonsensical the prophetic utterances futurologists make about the arrival any time soon, or necessarily ever, of artificial intelligence in the world, I worry that there are many real world consequences of the ever more prevalent deployment of the ideology of artificial life and artificial intelligence by high-profile "technologists" in the popular press. I worry that the attribution of intelligence to smart cards and smart cars and smart phones, none of which exhibit anything like intelligence, confuses our sense of what intelligence actually is and risks denigrating the intelligence of the people with whom we share the world as peers. To fail to recognize the intelligence of humans risks the failure to recognize their humanity and the responsibilities demanded of us inhering in that humanity. Further, I worry that the faithful investment in the ideology of artificial intelligence rationalizes terrible decisions, justifies the outsourcing of human judgments to crappy software that corrects our spelling of words we know but it does not, recommends purchases and selects options for us in defiance of the complexities and dynamism of our taste, decides whether banks should find us credit-worthy whatever our human potential or whether states should find us target-worthy whatever our human rights.

Futurology rationalizes our practical treatment as robots through an indulgence in what appears to be abstract speculation about robots. The real question to ask of the Robot Cultists, and of the prevailing tech-culture that popularizes their fancies, is not how plausible their prophecies are but just what pathologies these prophecies symptomize and just what constituencies they benefit.