When Arcade Fire released their newest single (Reflektor), heralding the release of their new album of the same name, I realized that I am long overdue on my intention to analyze (and maybe even defend) their album The Suburbs (2010[!]). Though the album is laden with critical praise and awards (Album of the Year at the Juno and Grammy Awards, and on the top of several best-albums-of-the-year lists), it has gotten a far more muted reaction from its fans than it deserves. Obviously, by merely writing this post, I’ve already tipped my hand as to what I think of the album, but I will try to avoid any aesthetic statements about the album, and cut straight to the interpretation. However, to adequately unpack Arcade Fire’s third full-length studio album, a bit of context is necessary.
Fittingly, a large portion of the blame for the almost faddish apathy The Suburbs has received can be attached to the reaction, overreaction to that reaction, and subsequent analysis and posturing in light of these reactions. In short: hipsterism.
It is almost a tautology to say that hipsterism and Arcade Fire are no strangers. One might even trace the current incarnation of hipsterism to right around the same time period that the Arcade Fire burst into the public psyche, around 2003-04. In some ways, Arcade Fire was the vanguard of this new hipsterism, with provocatively indie and millennial methods and themes. They are a large group of multi-instrumentalists, with some of the core members explicitly trying not to get “too comfortable” on any given instrument, keeping the sounds elemental and orchestral rather than going for an attention-grabbing solo. Their early themes involved building a family out of peers and loved ones after leaving the nest, feeling thrust into a harsh and indifferent world without the tools to survive, lovesickness, feeling the call to action, and feeling the need to rebel and the pain of being an adult when you were just getting the hang of adolescence. They exult in the power of solidarity and community in the face of difficulty (especially on Funeral, which was in part a reaction to the deaths of several family members). And like many 21st-century hipsters, their aesthetics are somewhat turn-of-the-20th-century; with their liberal use of strings and organs, they were practically paving the way for the waxed mustaches of their fans (not to downplay the influence of The Decemberists, mixologists, and the like). Combine the fun and funky style with a heavy dose of real virtuosity and you’ve got an album with the second-most appearances on end-of-decade Top 10 lists, behind Radiohead’s Kid A, and landing at #151 on Rolling Stone’s 500 Greatest Albums of All Time (not bad for a debut record).
That universal appeal predictably garnered high praise from a variety of culture critics, including some superlative (and maybe unwanted) accolades like the assumption that Arcade Fire would be the band that would “save rock and roll,” or MTV’s proclamation that the Arcade Fire was “Rock and Roll Champion of the World” when The Suburbs debuted at #1 on the U.S. and U.K. charts. This weird, wonderful, heavy responsibility provided some of the thematic fuel for their sophomore album, Neon Bible, an expression of the band’s reaction to its newfound position in the world. As perhaps the quintessentially indie band, they were trying to relate to their community without controlling it, and that prospect is difficult when you have voluntary cultists.
This place has become a bit of an echo chamber, seeing as there are only old posts making any noise around here. That being said, we’re about to get some new noises from new contributors (which I’ve been trying to incite for a while), so enjoy a few guest posts (note the byline) for as long as people choose to post here!
For some trickle of content, here’s a nice article on the conundrum that is Outsider Art: The Rise of Self-Taught Artists.
Friends who are still checking this page (if there are any of you), feel free to let me know if you’d like to contribute.
If you listen to the hype, the biggest bomb dropped on the United States in the War on Terror™ wasn’t an actual bomb. Instead, the United States’ body politic was shocked, shocked, to hear that the government has been collecting vast amounts of cell phone records, cell phone location data, and just about all of your online information from just about every major Internet platform (Google, Facebook, Apple, Microsoft, AOL, Yahoo, Skype, and PalTalk[?]). Basically, this information adds up to enough to predict your movements, who talks to whom, and anything you have ever entered into a search bar in a moment of curiosity.
The public shock and temporary outrage have seemed more surprising to some of us than the actual program, the ominously named PRISM, the legalities of which had been settled since the Patriot Act was renewed. But the story has some staying power; it seems to be the dominant narrative in American and European news media right now, despite the fact that countries like Russia and China have taken the news with a bit of a shrug, as if to say, “how did you think these things were done?”
So, has the slumbering giant finally woken? Has this leak precipitated the freak-out that privacy advocates have been waiting for?
No, probably not. In fact, despite the hype, this is hardly anything new. Those like myself who have long advocated more privacy, and observed the likely deleterious effects of public ignorance and deference to the presumably well-meaning authorities, could say that we saw this coming. Most of us had long suspected that this sort of spying and data collection was already happening. It’s something the American psyche has long emotionally accepted, the culmination of a series of steps down the road to the inevitable Hobbesian Leviathan.
And how does the Leviathan react to this story? To shift the blame, of course.
That blame, according to the Administration, lies with the whistleblower/leaker/future Hollywood Blockbuster pro-/an-tagonist Edward Snowden, the latest in the Bradley Manning mold of Americans who have decided to conscientiously object to the surveillance state that Americans have been settling into. So now the story has shifted to this man, a literally ad hominem attack, in order to distract and mislead the news media into thinking that the person who collected information on the surveillance state is more important than the surveillance state itself.
Of course, the fact that Snowden himself leaked these documents isn’t anything new. Bradley Manning and Julian Assange were both denounced and lionized for their roles in previous leaks of government misconduct. Manning now faces arcane charges and unspeakable (literally) treatment for his “treason” (read: whistleblowing), and Assange has had to make careful legal moves internationally to avoid extradition for crimes that are barely a pretext for thumbing his nose at the US government. There is at least one consensus among U.S. lawmakers: Snowden committed treason. Presumably they would say the same of Daniel Ellsberg, who leaked the Pentagon Papers. People who cry treason tend to lack a long view of history, after all; they forget that transparency about how government operates serves our democracy more than treason prosecutions do.
So Snowden sacrificed his $200,000 salary, his girlfriend with whom he shared a home in Hawaii, and his family, adopting a life of exile because the Obama administration has been hunting down anyone who leaks anything even minimally unflattering. One might say that this incident reinforces the fact that we need some protection for whistleblowers and the places that receive their information (like Wikileaks), so that the public can be involved in deciding whether this trade-off is appropriate or necessary. As Ellsberg puts it, how else can we evaluate whether “the machinery of our democratic government has entirely broken down”?
At a hearing of the Senate intelligence committee in March this year, Democratic senator Ron Wyden asked James Clapper, the director of national intelligence: “Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?”
“No sir,” replied Clapper.
The scariest part of the program may in fact be just how tone-deaf the security administration is to how evil it sounds. I mean, there are pretty obvious flaws in the logic of the lumbering institutions that we don’t seem to question, like “If the NSA trusted Edward Snowden with our data, why should we trust the NSA?” and “Have we just built all of the infrastructure a tyrant could possibly want?”
Obama’s part in all this (given that Presidents usually have to operate at a distance, setting grand strategic objectives rather than making specific tactical choices) seems to have been to buy into the Big Data mindset when it comes to the issue of terrorism. Instead of trying to figure out why terrorism exists, and to eradicate its causes (which seemed to be his promise during his first presidential campaign), he has adopted the institutionally generated stratagem of trying to figure out what data there is to help find and eradicate the terrorists that currently exist. This was the same flawed logic as the Bush Administration’s, which galvanized the Middle East, making the new leaders who pop up in the places of the old even more extreme and dangerous. We’re contributing to the evolution of terrorism by forcing terrorists to undergo survival of the fittest.
Obama’s biggest fault may be that he isn’t the philosopher king we foolishly thought he was.
And while the news media has done a surprisingly admirable job expressing outrage at the lack of Obama-promised transparency and the vastness of government intrusiveness and surveillance, it hasn’t done a very good job building a case for why any of this matters to Americans. Sure, observers have wryly mocked the name of the program, PRISM, as well as the tool by which analysts survey these vast troves of data, Boundless Informant, and its fittingly cartoonish and evil objectives. I’m sure many Americans see this as another opportunity to hate on Obama, regardless of what they think about the actual program. And some have even outlined the serious injuries Americans have sustained to any shred of privacy they once held (e.g., the fact that you can basically be criminally prosecuted at any time given enough information about your internet activities).
However, what the conversation forgets (or at least is content to simply sigh at) is that Americans don’t care about privacy. It has long been established that Americans today would rather trade liberty for security. Americans believe that the fullest extent of counter-terror measures available should be taken, including widespread (read: total) surveillance if necessary. They frankly have no conception of a cost-benefit analysis when it comes to terror, which just makes this episode the latest tip of the Leviathan to rear its increasingly ugly head.
When you look at terrorism alongside any other cause of death, the numbers just don’t add up. In 2001, compared to approximately 3,000 deaths due to terrorism (the absolute peak, of course):
- 71,372 people died of diabetes;
- 29,573 people were killed by guns; and
- 13,290 people were killed in drunk driving accidents.
And in the decade since, compared to essentially zero domestic terrorism deaths:
- roughly 360,000 were killed by guns (actually, the figure the CDC gives is 364,483; in other words, the rounding error alone is bigger than the total terrorism death toll); and
- roughly 150,000 were killed in drunk-driving accidents.
So the annual U.S. terror fatality risk for 1970-2007 is about 1 in 3.5 million. Let’s ask the why question now: how much marginal risk reduction are we getting from mass spying? Is this a sound strategy?
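That risk figure is easy to sanity-check with back-of-the-envelope arithmetic. The inputs below (roughly 3,300 U.S. terrorism deaths over the 38-year span, including 9/11, and an average population of about 250 million) are rough assumptions for illustration, not official statistics:

```python
# Back-of-the-envelope annual U.S. terror fatality risk, 1970-2007.
# Both inputs are rough assumptions, not official figures.
deaths = 3300                 # approx. U.S. terrorism deaths over the period (incl. 9/11)
years = 38
avg_population = 250_000_000  # rough average U.S. population over the period

deaths_per_year = deaths / years         # ~87 deaths per year
risk = avg_population / deaths_per_year  # one death per this many people, per year

print(f"annual risk: about 1 in {risk:,.0f}")
```

Whatever precise inputs you pick, the answer lands on the order of one in a few million per year, which is why the comparison to diabetes, guns, and drunk driving is so lopsided.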
Sometimes we get anecdotal success stories, like the Boston bombers being caught through the use of expansive surveillance, or federal agents making an arrest in connection with the ricin letters by using technology that photographs every piece of mail sent through the USPS. But we’re not really any closer to understanding why these crimes happen, let alone honestly dealing with the psychology that leads to mass shootings like Sandy Hook (gee, does anyone even remember that?).
But the point is that we are totally being outplayed on the strategic level by committing to mass spying as a cost-effective solution. Do we attach a breathalyzer to every car steering wheel by the same logic? How about a camera near every pool or home appliance that could electrocute a toddler? These strategies would seem to have far greater returns, but they’re not TERRORISM, so you can forget about it.
As Band-Aids go, Big Data is excellent. But Band-Aids are useless when the patient needs surgery. In that case, trying to use a Band-Aid may result in amputation.
Reporting these kinds of stories is hard, of course. And when you’re trying to deliver news that an audience wants to consume, it makes sense to focus on the theatricality of the spies and spooks rather than complex, philosophical questions about strategy and ephemeral values like privacy.
In the words of Newton Minow, “Some say the public interest is merely what interests the public. I disagree.” A fitting corollary might be that some say that national security is whatever it takes to secure the nation. I respectfully disagree.
I’ve recently been introduced to Waze, a mapping application for iPhone and Android developed by an Israeli tech startup. It’s a pretty great app, one that leaped several plateaus as a result of the opportunity created by the Apple Maps fiasco that decoupled Google Maps from Apple. Waze was the option adopted by many iOS users post-Google. Perhaps most surprising, even after Google Maps was re-released for iOS, many users stayed with Waze, preferring it even to the new-and-improved Google Maps. Waze has basically done what every tech startup hopes to do: compete with the big boys with hopes of maybe, just maybe, dethroning the king of the hill.
Waze’s success as a mapping utility lies in the fact that it has cultivated a community that is willing to contribute to the content of the application: they report traffic incidents, mapping inaccuracies, and their own location for added social benefits. And when it comes to tech companies, any actively engaged and devoted community is worth major money.
So, does it come as any surprise that the likes of Apple, Facebook, and even Google have expressed interest in acquiring Waze before it can grow up to be a fully-fledged competitor? Has that set off any bells for anyone remotely aware that this country used to have a policy of vigorous antitrust enforcement? Does it not seem intuitively problematic that the largest player in the maps business might buy one of the few proven potential competitors? Isn’t this what antitrust law is for? What possible pro-competitive rationale could Google advance for simply buying a direct competitor and consolidating the top market share?
Of course, intuition and the law overlap far less often than one would hope. Through the George W. Bush years, the Administration didn’t file a single antitrust case against a dominant firm. And 2001-2009 wasn’t exactly a lull in mergers and acquisitions. Acquisition became a common exit strategy for tech startups. The model was simple: make enough of a splash with your technology that it looks like the inevitable Next Big Thing, sell out to some deep-pocketed public firm, and walk away a happily cashed-out entrepreneur moving on to the next venture. E.g., AOL, MySpace, Skype, Instagram, and now Tumblr. With all of these companies growing up with their eyes fixed on the exit prize, the Bush Administration held itself to a rule that made it hard to find anything problematic in a merger or acquisition, because “hey, that’s the beauty of capitalism.”
Certainly, Waze is complicit in this cycle; they’ve almost deliberately ignored the need to create a revenue model that would actually support it as a business. Instead, the plan all along has been to find some company with the deepest pockets that either wants the freshest, newest mapping application on the market or very specifically does not want a fresh, new mapping application on the market. For the founders of Waze, who have families and mortgages, how can you blame them for selling their company for a billion dollars?
That makes Google’s reported interest in buying Waze all the more incredible: Google has no legitimate need for what Waze has developed (other than its user base, maybe). Google already has all the technology that Waze has to offer and more. Google is already installed on the vast majority of smartphones, and can track and correlate user data better than any firm on the market. The far more likely rationale for buying Waze would be to prevent some other competitor from offering a package that could compete with Google on some other level: smartphone, operating system, search platform, etc. In “platform analysis” terms, this is known as “vertical leveraging.”
In brief, you can think of just about anything as a platform: content gets delivered to consumers through various pipes, and any point where a user interacts with that stream of content is a platform. So a TV is a platform, an internet browser is a platform, a streaming video service is a platform, and a cell phone is a platform. Everything is really a platform. Now, whenever one or a few players dominate all possible choices at any stage of the process, they can create a bottleneck, with the standard microeconomic consequences of a market with few competitors. Essentially, those players can charge a higher-than-efficient price or otherwise constrain consumer choice toward their own products (which may amount to another subsidy to the company’s overall bottom line). For example, Comcast, as a local cable internet monopoly, may prefer to limit users’ ability to stream Netflix at top speeds in order to make its own cable TV packages look relatively more attractive. By the same token, Comcast may hold out on licensing NBC shows to Netflix because Comcast wants people to think of Netflix as an incomplete substitute for what they’d get with Comcast TV. Thus, Comcast has effectively “vertically leveraged” its monopoly as the only high-speed internet service provider in town to subsidize its other businesses in a way that would never fly if those other markets were straight-up competitive. That’s vertical leveraging.
So, to see a political-economic climate that encourages consolidation of firms and power as “the beauty of capitalism” is a near-sighted view of the picture (or maybe just depends on the eye of the beholder). From my own vantage point, I get far more concerned about the effects of monopolies and oligopolies on consumer welfare than I would be in a more competitive market. When firms compete simply by buying each other out, consumers lose both the ability to control the terms and conditions of how they consume and redistribute content and the competitive spirit that would have spurred innovation in a competitive market. After all, isn’t beating out the next guy on product or price the whole presumptive reason a profit-maximizing firm innovates? Isn’t this what led to too-big-to-fail banks, which consolidated so many assets with values contingent on their remaining stable that they can hold the economy at knife-point? And when a company like Google buys the services you like, the same monopolistic controls you can’t negotiate with just spread. Google doesn’t have any reason to respect its users anymore, if Google’s streamlined-to-the-point-of-illegality privacy policies are any indication.
So that’s why I’m willing to go out on a limb and guess (without knowing all of the specific details of possible software synergies or other benefits from vertical integration with a major player in the space) that, from a social standpoint, the public would probably be better served by Waze remaining an independent, competitive firm than by a buyout from Google or Facebook (though the Facebook case is less clearly bad, since Facebook might not already have everything Waze has to offer, technologically speaking). Of course, Waze would first have to find the profit model it has been deliberately ignoring, which might mean that Waze simply couldn’t exist if it weren’t able to sell out. To many, that new product might justify any social costs of giving a buyer like Google even more power and access to data. However, I think there is still the concern of the deceit involved in getting people to use a product like Waze in the first place: their calculus didn’t include the possibility that their Waze data would be integrated into the all-knowing Google profile. And if people had realized that, maybe they would have acted differently, or preferred to pay for another application with another business model.
Even apart from Waze specifically, I think we all need to be interested in fostering a healthier marketplace for competing technology companies, one that encourages bucking the normal trend toward consolidation of both power and services. That consolidation leads to oligopolistic results, where firms don’t make choices that benefit consumers so much as they create deadweight welfare loss by artificially constraining choice and the ability of competitors to offer different goods and services. Antitrust litigation and enforcement would go a long way toward keeping markets healthy. See, e.g., the Justice Department’s intervention to block AT&T’s acquisition of T-Mobile. It would just be nice if it weren’t a once-in-a-decade thing. Then again, it would be even nicer if the space had some ethics or pride that came along with remaining a profitable and independent startup, so that firms could self-enforce the principles of antitrust.
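To put a rough number on “deadweight welfare loss,” here is a toy sketch using a hypothetical linear demand curve; every figure is invented for illustration and has nothing to do with any real market:

```python
# Toy model: deadweight loss from monopoly pricing vs. competition.
# Inverse demand P = a - b*Q with constant marginal cost c (all made up).
a, b = 100.0, 1.0   # demand intercept and slope
c = 20.0            # marginal cost per unit

# Competitive benchmark: entry drives price down to marginal cost.
q_comp = (a - c) / b        # 80 units sold
p_comp = c                  # price of 20

# Monopoly: choose Q where marginal revenue (a - 2*b*Q) equals cost.
q_mono = (a - c) / (2 * b)  # only 40 units sold
p_mono = a - b * q_mono     # price jumps to 60

# Value destroyed: trades worth more than their cost that never happen.
deadweight_loss = 0.5 * (p_mono - p_comp) * (q_comp - q_mono)
print(q_mono, p_mono, deadweight_loss)  # 40.0 60.0 800.0
```

In this sketch, half the output disappears, the price triples, and the lost surplus (800 here) goes to no one at all; that is the welfare cost that a merger-friendly policy quietly tolerates.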
Regardless, at present, Waze has reached a major fork in the road: sell out, or stay independent and create its own path. And I’m guessing, to my own dismay, that they will take their own product’s advice and follow the surefire paths blazed before them.
The tragic bombing at the Boston Marathon yesterday was undoubtedly surreal to anyone who followed the events as they unfolded, myself included. For the first time that I can recall since September of 2001, time seemed to slow down and attention seemed dragged to the center of a swirling vortex of sorrow, fear, and anxiety in Northeastern America.
The human reaction to the Boston bombing had many echoes of 9/11. Some Islamophobes immediately suspected and accused Muslim religious extremists; others spouted unsubstantiated conspiracy theories that the federal government was behind the bombings; and still others leapt to score cheap political points. People fixated on the negative, to be sure: 3 dead and over 120 injured. Fears and suspicions of broader terroristic plots have been raised and doubted.
But, as with 9/11, the overwhelming response seemed to be one of solidarity. People immediately focused on treating and organizing the wounded and bewildered–what a weirdly ironic blessing and/or oversight of the bomber(s) that the bombs went off in a location where hundreds of medical personnel and ambulances already were waiting to treat potential injuries incurred at the marathon!–and police already were onsite to cordon off the area and preserve the scene for evidence gathering.
And again, like 9/11, people from across the country and across the world came together to show support. Reddit, my favorite site on the internet, discharged its self-appointed duties with distinguished honor. Redditors compiled, verified, and posted eight pages of real-time updates and a summary of events, and collected offers from Bostonians offering stranded marathoners ground transportation, airline flights, places to stay for the night, and even free pizza.
As Patton Oswalt, a prolific purveyor of perspective, wrote in an online piece, the response of almost literally everyone else was to help and heal, not to harm.
But here’s what I DO know. If it’s one person or a HUNDRED people, that number is not even a fraction of a fraction of a fraction of a percent of the population on this planet. You watch the videos of the carnage and there are people running TOWARDS the destruction to help out. (Thanks FAKE Gallery founder and owner Paul Kozlowski for pointing this out to me). This is a giant planet and we’re lucky to live on it but there are prices and penalties incurred for the daily miracle of existence. One of them is, every once in awhile, the wiring of a tiny sliver of the species gets snarled and they’re pointed towards darkness.
But the vast majority stands against that darkness and, like white blood cells attacking a virus, they dilute and weaken and eventually wash away the evil doers and, more importantly, the damage they wreak. This is beyond religion or creed or nation. We would not be here if humanity were inherently evil. We’d have eaten ourselves alive long ago.
The immediate legacy of 9/11 was to create a broad base of support for a president who was already divisive; now we are in a dramatically overpolarized political climate in dire need of some kind of common ground. Hopefully, one legacy of this atrocity will be a move back toward some sort of consensus that we are all Americans in it for the common good, regardless of political party.
Unfortunately, there likely will be other sorts of echoes of 9/11 in the form of crass attempts at commercializing a tragedy. Your guess is as good as mine as to what color the donation ribbon/wristband/marathon bib will be. Where Boston will hopefully begin to really diverge from 9/11 is how America reacts to these acts of terrorism.
Aside from the bombings themselves, perhaps the biggest story apparent from a survey of the coverage of the Boston bombing is the absolutely revolutionized technological landscape that now surrounds us, as opposed to 11.5 years ago. The state investigation (as opposed to the public one, a significant difference in this case) may have simply proceeded with the more conventional 9/11-era tools such as bomb fragment analysis, call tracing, purchase tracking, etc. But what everyone already knows is that investigators will heavily rely on crowdsourced public surveillance. Cameras now sit in every pocket and on every corner, making the public a far more efficient surveillance apparatus than any of the state’s own tools.
There are limits to the crowdsourcing. The data used in the investigation will be crowdsourced; the investigation itself will not be. A crowdsourced investigation runs a high risk of becoming a witch hunt, as we saw after the Newtown shooting.
Hopefully, the vast comparative efficiency of crowdsourcing surveillance and intelligence to the clunky state methods of intelligence-gathering will prove itself as a source of American resilience in the aftermath of this tragedy. Even more hopefully, Boston will prove that the authorities can ask for information rather than demanding or simply taking the photos off of phones “for national security reasons.” And who knows, maybe this method of law enforcement will make Americans think twice about the decade of fruitless torture we have implicitly authorized through inaction.
This was a lesson we could have learned from 9/11 itself. After all, Flight 93 (the one headed for the Capitol) wasn’t brought down by the TSA or even the Air Force; it was brought down by American citizens on the plane who had unfettered access to crowd-sourced intel (in that case, cell phone calls to/from loved ones who had seen what happened to the World Trade Center). Of course, crowd-sourcing intel will undoubtedly lead to a lot of false positives, and it already has, but which is worse: some racist paranoids calling out turban wearers (which they already do), or giving the national security leviathan the power to control for everything all the way down to pressure cookers?
Although, if 9/11 is our historical guide, I’ll give you one guess as to the direction that our public approach to civil liberties will take.
“They can give me a cavity search right now and I’d be perfectly happy,” said Daniel Wood, a video producer from New York City who was waiting for a train.
It has been said that you can understand a culture by examining their founding mythology. What does that say when we find that our heroes are all the same across cultures? Does it mean, in the words of most fictional villains, “We’re not so different?”
Joseph Campbell, a name that might be familiar to high schoolers, argued in The Hero With a Thousand Faces that our shared stories say quite a lot about us. Campbell points out that despite the cosmetic differences between nations and cultures, there are more fundamental stories we will always share. The fact that those stories have any kind of resonance speaks to our innermost and unchangeable feelings and ways of seeing the world. Because we share a common foundation as human beings, subject to the same biological imperatives and urges, we can come to understand other cultures’ essential commonality with ourselves.
An evolutionary psychologist (or someone who’s simply read more Jung than I have) might have more insight into possible explanations for why Campbell’s thesis is true on a neuro-/psychological level, but the point is that there are bedrock experiences that we can relate to across cultures. If you want a more comprehensive version of the story, Campbell and Bill Moyers got together to produce a six-part miniseries called The Power of Myth, which was just released for free online.
Campbell suggests that archetypal events like the flood myth and characters like Orpheus (whose tale of traveling to the underworld to save a loved one appears not only in Ancient Greece but also in Feudal Japan, Sumeria, and Mayan culture) or Jesus (whose life-death-resurrection arc has heroic antecedents in Medieval Europe, Ancient Egypt, Hinduism, and Buddhism) show just how deep those shared stories run.
And of course, for those wise enough to slavishly learn the lessons of history, like George Lucas or the Wachowskis, you can make a movie that appeals across all cultures. Which is why people who don’t like Star Wars or The [Original] Matrix probably shouldn’t be trusted.