This is the first chapter of a book I read just before the summer. I highly, highly encourage anyone who found this article to go read that book (Amazon for the lazy). I really liked it because it explores the intersection between urban planning and technology, and Townsend doesn't shy away from criticism.
edit: rob05c, are you aware that what used to be personal tags now doesn't register as two tags? I.e. my post should pop up in #technology as well as #technology.veen #bugski
edit2: accidental shoutout to rd95. My bad.
Yeah, there are enough inaccuracies in the first chapter that I'm a little skeptical. Mmmm... Manhattan Project? Apollo? Yeah, sure, smart thermostats are cool and all, but the focus is a little... comp-sci centric. And the problem pervades the entire discussion:

...that's not a smart toilet. A "smart" toilet would take into account water demand. That's a Toto electric flusher. It's a solenoid with a motion sensor in front of it. A system it ain't.

That's a crashed signage server. Know why they crash? Because the signage industry died in 2008, when a study came out indicating that people look at their phones too much to notice signage. The DSN industry never made it out of Windows XP, because the features it needed didn't exist in Vista. Know why you see BSODs at airports? Because the signage networks are eight years old, because nobody makes enough money to keep them updated. They are also not systems... the systems would report back to the mothership to say "hey - I'm dead" and technicians would roll to fix them. That is, until the money dried up and the networks died. This isn't the sign of a "brittle" system; this is the sign of a system persisting eight years after its raison d'être has expired.

Except these were glitches that were resolved in 90 minutes. We're talking about a grand total of 2 hours of unscheduled downtime. In 2006. Nearly ten years ago. iPhones didn't exist back then.

Rather, that's $300 billion over the course of decades, before the problem needed to be addressed. My uncle was a Y2K fixer - the reason it only cost $300 billion is that lots of networks updated and upgraded their gear so they wouldn't have to deal with it. The reason we aren't eating dog food and pledging allegiance to The Great Humungous is that the Y2K bug was greatly exaggerated.

Author's diagram:

L3 network map:

...or maybe it's not a myth?
Speaking as someone who spent probably 400 hours doing acoustical analyses on backup generators on cell towers in the middle of bumblefuck nowhere post-9/11 as part of King County's 911 response plan, I can say that the above description is abject horse shit. There's a reason emergency response services bolstered the hardness of cellular networks in response to September 11 - they're virtually impossible to kill through nefarious acts or accident, and super easy to kill via civil mandate. Wanna kick all the citizens off of T-Mobile and give it over to EMS? Flip a switch. Lost 35 towers? Good thing you have 900 of them. Those of us who live near the beach have at least two friends with "microcells," which extend the cellular network over their own residential internet uplink. Cellular networks are rock hard.

Yeah, I know the guy that did that. He works for a major studio. They have one show that is largely watched online, and they switched from AWS to a cheaper provider that assured them it could handle surge. It couldn't. The failover took down AWS. And what did we lose? Netflix. Life safety this ain't.

1) Not without a nuke.

2) I mean, seriously. Short of space-burst nuclear weapons, the GPS grid isn't going down.

3) And by the way, the first-gen iPhones didn't have GPS; they used cell tower triangulation. This is still the principal localization system used by your phone. GPS is a backup.

4) And by the way, considering how reliant the military is on GPS, chances are good that their network is hardened.

...except the only devices affected by the infection were a specific control system from a specific manufacturer. This is akin to crafting a virus that will only give Donald Trump any symptoms and then arguing it isn't a targeted strike.
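For the curious: the cell tower triangulation mentioned in (3) can be sketched in a few lines. This is a toy three-tower multilateration in 2D - real networks estimate range from signal strength and timing over many towers and fit by least squares, and all the names and numbers here are mine, not from the thread or the book. Given three towers at known positions and a distance estimate to each, subtracting the circle equations pairwise gives a linear 2x2 system:

```python
import math

def trilaterate(towers, dists):
    """Estimate (x, y) from three tower positions and ranged distances.

    Each tower gives a circle (x - xi)^2 + (y - yi)^2 = di^2. Subtracting
    the first equation from the other two cancels the quadratic terms,
    leaving a 2x2 linear system solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero if the towers are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Hypothetical phone at (3, 4) ranged by towers at three corners:
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
phone = (3.0, 4.0)
dists = [math.dist(t, phone) for t in towers]
print(trilaterate(towers, dists))  # ≈ (3.0, 4.0)
```

With noisy real-world range estimates you'd overdetermine the system with more towers and solve by least squares instead of exactly.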
_____________________________________

The article strikes me as uninformed fear-mongering.

A sleepy suburb at night, by day Palo Alto becomes the beating heart of Silicon Valley, the monied epicenter of the greatest gathering of scientific and engineering talent in the history of human civilization.
Despite the pedigree of its clientele, the smart toilet doesn’t work.
And then, out of the corner of my eye, I saw it. The Blue Screen of Death, as the alert displayed by Microsoft Windows following an operating-system crash is colloquially known. Forlorn, I looked through the glass at the lone panel. Instead of the stream of genetic discoveries, a meaningless string of hexadecimals stared back, indicating precisely where, deep in the core of some CPU, a lone miscomputation had occurred.
“BART staff began immediately working to configure a backup system that would enable a faster recovery from any future software failure.” But two days after the first failure, “work on that backup system inadvertently contributed to the failure of a piece of hardware that, in turn, created the longest delay.” Thankfully, no one was injured by these subway shutdowns, but their economic impact was likely enormous — the economic toll of the two-and-a-half-day shutdown of New York’s subways during a 2005 strike was estimated at $1 billion.
Stemming from a trick commonly used to save memory in the early days of computing, by recording dates using only the last two digits of the year, Y2K was the biggest bug in history, prompting a worldwide effort to rewrite millions of lines of code in the late 1990s. Over the decades, there were plenty of opportunities to undo Y2K, but thousands of organizations chose to postpone the fix, which ended up costing over $300 billion worldwide when they finally got around to it.
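The two-digit-year trick the excerpt describes is easy to demonstrate. A minimal sketch (function names and the "pivot window" remediation are illustrative choices of mine, not from any particular legacy system):

```python
def years_between(start_yy: int, end_yy: int) -> int:
    """Naive arithmetic on two-digit years, as in many legacy systems."""
    return end_yy - start_yy

# A record opened in 1998 ("98") and checked in 2000 ("00"):
print(years_between(98, 0))  # -98: time appears to run backwards

def years_between_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    """One common Y2K remediation: a pivot window that maps 00-49 to
    the 2000s and 50-99 to the 1900s, avoiding a full field widening."""
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

print(years_between_windowed(98, 0))  # 2, as intended
```

The windowing fix is cheaper than widening every date field, which is part of why so many organizations deferred the real repair for decades.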
We also like to think that the Internet is still widely distributed as Baran envisioned, when in fact it’s perhaps the most centralized communications network ever built.
Despite the existence of many chokepoints, the Internet’s nuke-proof design creation myth has only been strengthened by the fact that the few times it has actually been bombed, it has proven surprisingly resilient.
Cellular networks fail in all kinds of ugly ways during crises; damage to towers (15 were destroyed around the World Trade Center on 9/11 alone), destruction of the “backhaul” fiber-optic line that links the tower into the grid (many more), and power loss (most towers have just four hours of battery backup)
Amazon Web Services, the 800-pound gorilla of public clouds that powers thousands of popular websites, experienced a major disruption in April 2011, lasting three days.
Another “cloud” literally floating in the sky above us, the Global Positioning System satellite network, is perhaps the greatest single point of failure for smart cities.
The wide spread of Stuxnet was shocking. Unlike the laser-guided, bunker-busting smart bombs that would have been used in a conventional strike on the Natanz plant, Stuxnet attacked with all the precision of carpet bombing.
I noticed some of the inaccuracies in his examples, but not all of them, because I don't know enough about the technical details like you do. Thanks for pointing them out. The bigger argument still holds, I think: there are a lot of problems with the smart city that are not being addressed by IBM or by Cisco, for example. While the impact now is not much more than infoscreens crashing or public transport disruption, there will be bigger systems with bigger problems if we decide to 'smarten up' our cities. Especially when data-driven decision making becomes more and more central to the functioning of local governments. What happens when 'smart' public services crash, are inaccurate, or are just misguided?

I’ve also seen my share of gaps, shortfalls, and misguided assumptions in the visions and initiatives that have been carried forth under the banner of smart cities.
I think it's premature to discuss shortfalls when the technology and implementation are still at the "scenario" level of development. It's like that whole bullshit "internet of things" moniker, which GE just calls "telemetry." "Internet of things?" Ooh! Aaah! Fancy! Unknown! Mystical! Subject to change without notice! Telemetry? A hundred years old and not something you can sell to Wall Street.