Wednesday, 17 July 2013

Can fracking cause bigger, more frequent earthquakes?

Injecting fluids into the Earth, whether to recover natural gas or to obtain thermal energy from the planet, can cause earthquakes. New reports that look at American fracking, deep waste-water injection, and geothermal activities suggest the risks are significant and that a strong regulatory framework is needed to deal with them.

The most striking indication of human-induced earthquakes is provided by the graph below, which shows the cumulative number of earthquakes of magnitude 3.0 or greater in the central and eastern US. The clear increase from 2005 onward coincides with the rapid growth in shale gas wells and the associated increase in deep waste-water injection. Between 2005 and 2012, the shale gas industry in the US grew by 45 percent each year.
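For readers curious how such a cumulative-count curve is assembled, here is a minimal sketch. It assumes a generic catalog file with "time" and "mag" columns; the file name and column names are illustrative, not the exact format behind the published graph.

```python
# Minimal sketch: cumulative count of magnitude >= 3.0 earthquakes per year
# from an earthquake catalog export. "catalog.csv" and its "time"/"mag"
# columns are assumed for illustration only.
import csv
from collections import Counter

counts_per_year = Counter()
with open("catalog.csv", newline="") as f:
    for row in csv.DictReader(f):
        if float(row["mag"]) >= 3.0:
            counts_per_year[row["time"][:4]] += 1  # assumes ISO-8601 timestamps

running_total = 0
for year in sorted(counts_per_year):
    running_total += counts_per_year[year]
    print(year, running_total)
```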

Three reports have been published this month in Science that add to our limited but growing data on the causal link between fluid injections and earthquakes.

Pumping fluids into the ground changes how groundwater travels through porous rock systems. This can affect the rock in two ways. First, the injected fluids can leak directly into faults, which are fractures in the rock, and raise the pore pressure along the fault, causing it to fail. Second, they can alter the mass or volume of the rocks that overlie a fault, changing how much loading the layers underneath have to bear. In this case the fluids do not interact with the fault directly, but cause it to fail by changing its surroundings.
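The first mechanism is often summarized with the Coulomb failure criterion: a fault slips when the shear stress on it exceeds its frictional resistance, and that resistance shrinks as pore pressure rises. The snippet below is a simplified illustration with invented numbers, not an analysis from any of the three reports.

```python
# Simplified Coulomb failure check: slip occurs when shear stress exceeds
# cohesion plus friction times the effective normal stress (normal stress
# minus pore pressure). All values are invented for illustration.
def fault_fails(shear_stress, normal_stress, pore_pressure,
                friction=0.6, cohesion=0.0):
    effective_normal = normal_stress - pore_pressure
    return shear_stress >= cohesion + friction * effective_normal

# Before injection (pore pressure 20 MPa) the fault holds...
print(fault_fails(shear_stress=40, normal_stress=100, pore_pressure=20))  # False
# ...but raising pore pressure to 40 MPa pushes the same fault to failure.
print(fault_fails(shear_stress=40, normal_stress=100, pore_pressure=40))  # True
```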

In the first report, a review article, William Ellsworth of the US Geological Survey points out that earthquakes are occurring in unusual locations in North America and Europe. He looks at activities where injecting fluids into the ground may cause earthquakes—such as mining for minerals and coal, oil and gas exploration/production, as well as the building of reservoirs and large waste-water disposal sites. Ellsworth examines three case studies of deep injection which are particularly convincing.

In 1961, fluid was injected to a depth of 3.6 km at a Colorado chemical plant to dispose of hazardous chemicals. By early 1962, nearby residents started reporting earthquakes. By 1966, 13 earthquakes of magnitude 4.0 or greater had been recorded in the area.

In 1969, the US Geological Survey also started injecting fluids at another site in Colorado. This time, the aim was to understand how fluid pressure could influence earthquakes. The researchers noticed that whenever the fluid pressure went beyond a critical threshold, more earthquakes were observed. This indicated that earthquakes could potentially be controlled if injection pressures were managed properly.

The most remarkable example, however, comes from injections in Paradox Valley in Colorado (which are still ongoing). During the period of 1985 to 1996, only three tectonic (natural) earthquakes were recorded within 15 km of the site. Between 1991 and 1995, when the injection tests were conducted, hundreds of induced earthquakes were detected within 1 km of the site, while few were detected beyond 3 km; all were below magnitude 3.0. This situation changed with continued injection. In 2000, an earthquake of magnitude 4.3 was recorded 8 km from the site, while earlier this year there was one of magnitude 3.9 at roughly the same distance. This indicates that long-term injection can expand the seismically active area and trigger bigger earthquakes.

In the second report, Emily Brodsky and Lia Lajoie of the University of California at Santa Cruz looked at the Salton Sea Geothermal Field in California. They tracked the total volume of fluid injected into and extracted from the field to harvest geothermal heat, and they found that it correlates with the number and magnitude of earthquakes there. So it's not just injection, but also extraction, that needs attention.

The last report is by Nicholas van der Elst of Cornell University and his colleagues. This study tracked induced earthquakes that are triggered by much larger, natural earthquakes occurring far away. Injecting water deep underground elevates pore pressure and leaves the faults and fracture networks in the rock closer to failure. A distant event can then push the system over the edge and cause earthquakes around the injection sites.

The fact that human activity alters the landscape in ways that can cause earthquakes is not surprising given the scale of those activities. The mud volcano "Lusi" (the Sidoarjo mud flow) in Indonesia is a striking example of what can happen when drilling interacts with the subsurface, and debate lingers as to the "human trigger" for the event. At Lusi, a mud volcano erupted shortly after drilling for gas, and as the disaster developed, the mud covered surrounding houses, displacing some 13,000 families and closing 30 factories and hundreds of small businesses.

We have long known that earthquakes can be induced by fluid injection, and we are learning more about how injections cause them. So perhaps, as Ellsworth concludes, it is high time to put a clear regulatory framework in place to mitigate the risk.

Science, 2013. DOI: 10.1126/science.1225942, 10.1126/science.1239213 and 10.1126/science.1238948 (About DOIs).

Dougal Jerram is Professor II at University of Oslo. This article was first published at The Conversation.

This story has been updated to clarify a paragraph.

Listing image by Flickr user: danielfoster437



Volcanic earthquakes produce a “seismic scream” just before eruption

The Redoubt Volcano, after a 1990 eruption melted most of a glacier off its southern face.

Volcanic activity is intimately associated with seismic activity. You simply can't force molten or semi-molten rock through a mountain without cracking a few faults in the process. If we could learn to read that seismic activity correctly, it could provide valuable advance warning of impending eruptions.

A 2009 eruption of Alaska's Redoubt Volcano may not get us much closer to an advance warning, but it provides a detailed glimpse of the last moments before an explosive eruption. Shortly before the eruption, small faults within the volcano were breaking so frequently that their signals merged into what's being called a "seismic scream." Then, within a few minutes of the eruption, the scream got cut off as the last resistance gave way.

Redoubt is a stratovolcano, built from material that melted as the Pacific plate subducted beneath Alaska. Like some more famous examples, such as Mount St. Helens, it alternates between slow eruptions of extremely viscous rock and sudden, explosive ones. The 2009 eruption was accompanied by a number of small explosions (small at least in the sense that the mountain was still there afterward); the researchers focused on the seismic activity that led up to these explosions.

Most of the earthquakes associated with the eruption were small (between magnitude 0.5 and 1.5) and centered a few kilometers below the volcanic vent. There was plenty of activity of this sort seen during the eruption, but something unusual happened before the largest explosion: "These small earthquakes occurred in such rapid succession—up to 30 events per second—that distinct seismic wave arrivals blurred into continuous, high-frequency tremor." This continuous tremor is what is being called the "seismic scream."

The earthquakes themselves might be enough to make you nervous, but something even more unnerving happened after a few minutes of screaming: things suddenly went quiet. For somewhere between 30 seconds and a minute, the low-magnitude quakes stopped, although occasional larger ones still struck. And then the explosion hit.

The authors extrapolated the scaling relations of known earthquakes down to events of this size and concluded that they represent the rupture of small faults, about 20 m long, that slip only a millimeter with each event. To understand the forces involved, the authors built a model of the internal faulting at the volcano and got it to reproduce the behavior seen on the seismographs.

As they gradually increased the stress on the faults, the authors' model responded with progressively more frequent earthquakes, reaching the seismic scream phase at stressing rates of about 5 megapascals per second (MPa/s). At that point, the faults alternate between sticking and sudden slips. As the stressing rate reaches 20 MPa/s, the sticking stops and the fault settles into a smooth glide.
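A toy version of that stick-slip behavior can be captured with a one-block spring-slider sketch: load stress at a constant rate, let the block slip when the stress exceeds a static threshold, and drop the stress afterward. The strengths and rates below are invented for illustration and are not the authors' actual model.

```python
# Toy stick-slip model: stress builds at a constant rate; when it exceeds a
# static strength the fault slips and stress drops to a lower dynamic level.
# Faster loading yields more frequent slips, loosely mimicking the ramp-up
# toward continuous tremor described above. All parameters are invented.
def slips_per_second(stress_rate_mpa_per_s, duration_s=10.0, dt=0.001,
                     static_strength=1.0, dynamic_strength=0.4):
    stress, slips, t = 0.0, 0, 0.0
    while t < duration_s:
        stress += stress_rate_mpa_per_s * dt
        if stress >= static_strength:   # fault fails...
            stress = dynamic_strength   # ...and the stress partially relaxes
            slips += 1
        t += dt
    return slips / duration_s

for rate in (0.5, 1.0, 5.0, 20.0):      # loading rates in MPa/s
    print(f"{rate:5.1f} MPa/s -> {slips_per_second(rate):5.1f} slip events/s")
```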

The authors consider that value, 20 MPa/s, unexpectedly high and don't seem eager to speculate about what might require that much force to shift. "It is beyond the scope of this study to rigorously evaluate potential [magma] conduit processes responsible for such extreme loading conditions." But then they go ahead and do so anyway, suggesting that the magma is forcing an obstructing piece of rock against the walls of the conduit that leads to the surface. All told, the obstruction seems to move only about five meters, after which the way is clear for the material to explode to the surface.

Nature Geoscience, 2013. DOI: 10.1038/NGEO1879  (About DOIs).



Manifesto: Let my upload bandwidth flow!

Consumer broadband connections in the US are almost all "asymmetric" connections—that is, out of the total amount of bandwidth available, more bandwidth is allocated to the "download" direction than to the "upload" direction. This decision made sense 15 years ago when DSL connections were first gaining momentum. The Internet—and specifically the World Wide Web—was far more of a consumption-oriented construct then. People were far more interested in reading or watching content than in putting up their own. We wanted, needed, fast download speeds, and broadband providers jumped at the chance to differentiate themselves from dial-up ISPs by offering fast always-on connections and by using as much of that bandwidth as possible to send data to users.

The story today is very, very different. Download speeds are still important (by some estimates, just a bit under half of all Internet traffic is from people watching Netflix and YouTube videos), but it's become far easier to create content too. The ability to actually share anything that you've created relies on being able to upload that content.

Slow upload speeds are a problem even my mother has commented on—and when my mother starts commenting on a technical issue, that's when I know that it's absolutely a mainstream concern. She enjoys making videos of things she's painted and of new plants in the backyard garden, then uploading those videos to YouTube to share with her friends. But she's stymied by how long it takes to upload her videos, even if they're relatively short. She and my father are trapped by Comcast into an overly expensive residential cable modem plan with a grossly asymmetric download/upload ratio. Explaining the problem to her yielded the common sense observation, "Well, that's just stupid. How am I supposed to share videos if it takes longer to get them to YouTube than it does to film them in the first place?"

It gets worse. Near-ubiquitous cloud service provider Dropbox announced last week that it is introducing a set of APIs to allow applications and services to seamlessly sync across devices; its eventual goal is to supplant local storage by making a user's entire "data footprint" remote and cloud-based. This is great news, but whenever I read about a new feature like this, it rings hollow to me. The real issue that must first be addressed is how much additional upload bandwidth this type of thing will require—and how disappointed I'm going to be when I actually use an app that implements it. It's like the first time a new cloud-backup customer sits down and realizes that while backing up hundreds of gigabytes in the cloud sounds like a great idea, it's gonna take a month to actually upload everything the first time.
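To put rough numbers on that pain, here's a back-of-the-envelope calculation. The 300 GB backup size is an assumption for illustration; the 3 Mbps and 20 Mbps upload rates match the plans discussed later in this piece.

```python
# Back-of-the-envelope: how long does an initial cloud backup take over a
# consumer upload link? The data size is an assumed example; the speeds are
# the 3 Mbps Comcast upload and 20 Mbps symmetric FiOS plans mentioned below.
def upload_days(data_gb, upload_mbps):
    bits = data_gb * 8 * 1000**3            # decimal gigabytes to bits
    seconds = bits / (upload_mbps * 1e6)    # assumes ideal, sustained throughput
    return seconds / 86400                  # convert seconds to days

print(f"300 GB at  3 Mbps: {upload_days(300, 3):.1f} days")   # ~9.3 days
print(f"300 GB at 20 Mbps: {upload_days(300, 20):.1f} days")  # ~1.4 days
```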

Thus, when I'm choosing a broadband plan, the first thing I look for isn't the download speed, the provider, or really even the price—the first thing I look for is the upload speed. More than any other factor, the upload speed of a broadband connection determines its desirability to me. And it should for you, too.

I am in love with the idea of total data mobility. I've got terabytes of video and audio and other personal data on my home NAS, and it's configured so that I can access it via the Internet. In theory, I'm already ahead of Dropbox's vision—my data can already "follow" me, and I shouldn't need USB sticks or DVDs if I want to watch a movie while I'm on vacation. In practice, though, it doesn't work that way: the teeny-tiny soda straw of upload bandwidth I'm allotted is an impossible-to-overcome barrier that makes it impractical to actually use my media remotely. I can stream a video, but big high-definition files often can't be played at all, at least not without constant hiccupping. I can copy archived applications to reinstall on my laptop when I'm remote, but only at very slow speeds.

It's easy to solve the technical issues involved in remote access—the mechanisms for securing the content and making it easy for me to reach and difficult for outsiders. The issue is one of bandwidth, since my upload bandwidth at home is the speed at which I can download from my servers when I'm remote. It's extremely frustrating to be so close to the fully mobile setup I want and have no good way to cross that final step. And if I want to share something with friends, it's ridiculous that in 2013 we still have to resort to sneakernet and USB sticks. The situation is no better than it was back in junior high, when we traded data on floppies because our 2400 bps modems were too slow.

The idea of total personal data mobility isn't something that the average user has much exposure to yet, because the idea is so far-fetched with current US home broadband plans that it might as well be science fiction. My mom, with her frustration about uploading YouTube videos, doesn't really need to share her videos straight off her own computer—but that's in part because US broadband consumers are on the wrong end of a broadband cartel-constructed chicken-and-egg problem. Why would we need tons of upload bandwidth to share our photos and movies and all of the other things that make up our digital lives when other services already exist to fill those needs? Because YouTube and those other services don't fill those needs—not nearly well enough, anyway. With constrained upload pipes like most folks have, the widespread distribution of user-created content relies on the assistance of broadband cartel-friendly helpers like YouTube—sites which degrade the things people have created by sandwiching them between unwanted advertising, and which remove clearly non-infringing content at the drop of a perjurious hat. User-generated content production will benefit from speedier distribution—the noise level will certainly rise as more folks post movies of worthless things, but the signal is guaranteed to rise along with the noise.

If they built it, we would use it. Huge Internet names have sprung into being to serve the content that we can't serve ourselves—more accurately, to monetize that content. If we could serve our own stuff, we wouldn't have to license away our content and let YouTube slap advertisements all over it.

Some US broadband providers do indeed provide adequate upload speeds. FiOS, for example, has a relatively affordable symmetrical 20Mbps plan (20Mbps of download bandwidth, 20Mbps of upload bandwidth) in my area. But unfortunately, although FiOS is available in my area, it isn't available on my street (and, according to Verizon, it never will be).

Since I'm on a Comcast business-class connection, the next tier up from my current 16Mbps down/3Mbps up only bumps my upload speed to 5Mbps—but it adds another $40 per month onto my bill. I know I said that upload speed trumps price, but there are limits. It doesn't trump price quite enough for me to justify almost $500 more per year for a measly additional 2Mbps.
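A quick sanity check on that arithmetic, using only the figures quoted above:

```python
# Cost of the next Comcast business tier, per the figures in the paragraph
# above: $40/month more for an upload bump from 3 Mbps to 5 Mbps.
extra_per_month = 40
extra_upload_mbps = 5 - 3

per_year = extra_per_month * 12
print(f"${per_year}/year for {extra_upload_mbps} extra Mbps of upload "
      f"(${per_year / extra_upload_mbps:.0f} per Mbps per year)")
# -> $480/year for 2 extra Mbps of upload ($240 per Mbps per year)
```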

The "consumer" mindset, the idea that the average Internet user should be fed pre-approved content produced by pre-approved sources but not generate any content on their own, is outdated and simply wrong. In a world of sanely-allocated symmetric broadband plans, YouTube would be peripheral to the main distribution and sharing sources—their enormous distributed bandwidth would still be required for Big Content's movies and for "viral" videos, but for sharing that video of your garden or of your kids dancing around? You could host it yourself, directly, free of interference.

I'd vote with my dollars for more upload speed, but since I'm trapped in a duopoly (Comcast cable vs. Verizon's ludicrously outdated, ridiculously slow DSL service) I really don't have any vote at all.

We have things to say and we have data to share. Give us more upload bandwidth.



Canadian “patent troll” Wi-Lan loses East Texas trial

An Ottawa-based patent-licensing firm named Wi-Lan is one of several patent licensing operations that claim to own patents relating to wireless Internet. Wi-Lan filed a lawsuit against 22 companies over Wi-Fi back in 2007. In 2010, the firm went to East Texas to sue others, claiming it owned patents critical to the data transmission standards in mobile phones. Later that year, it also sued anyone who makes cable modems.

Most patent cases settle, but a group of defendants in a Wi-Lan mobile phone case saw it through to trial, resulting in a defense win against Wi-Lan that had immediate financial results for the company, which trades on NASDAQ as WILN. The patent-holding company lost about a third of its stock value after the verdict was announced Monday afternoon, but it has since made a partial recovery.

The defendants at the six-day trial were Alcatel-Lucent, Ericsson, HTC, and Sony; LG Electronics was also sued but the docket shows LG settled with Wi-Lan in 2010. Court records indicate that the jury took just about one hour to decide the case. The verdict form shows that none of Wi-Lan's patents in this case were found to be infringed, and three of them were found to be invalid because they were anticipated by earlier technology or were just obvious.

“HTC believes that Wi-Lan has exaggerated the scope of its patent in order to extract unwarranted licensing royalties from entities who have been focused on bringing innovation forward in their own products,” a spokeswoman for Taiwan-based HTC told Bloomberg. "We think this validates our belief that Wi-Lan was stretching the boundaries of its patents, and the jury confirmed that belief,” said an Alcatel-Lucent spokesman.

Alcatel-Lucent is a company that has been on both sides of East Texas patent lawsuits. While it beat a "patent troll" in this case, the French telecom has also sued an array of US retailers in the same patent-happy district using patents it acquired from Bell Labs. Alcatel lost that case on appeal earlier this year, and its pursuit of a patent case in a market it didn't compete in earned it the moniker of "corporate troll" from victorious defendant Newegg.

Today's loss for Wi-Lan is hardly a wipe-out for the company, which boasts a portfolio of 3,000 patents. Still, one analyst who follows the company lowered his estimate of what these defendant companies would be paying Wi-Lan by a cool $12 million.

Wi-Lan can appeal the loss to the US Court of Appeals for the Federal Circuit and is likely to do so. But non-practicing patent holders haven't been faring well there lately.



Remember Jay-Z’s terrible Android app? Privacy group wants feds to investigate




Hands on with OWA for iPhone, Microsoft’s Outlook for iOS (sort of)

Microsoft has added another piece to its "free" Office package for iPhone users—free as in “free with an Office 365 account,” that is. This time, Microsoft included an Outlook mail client and calendar…sort of. Called OWA (as in Outlook Web App) for iPhone, this app takes the behaviors and interface of the Outlook client on Windows Phone 8 and embeds them in an iOS application formatted for the iPhone. It's similar to what Microsoft did with the Office app released last month.

As its name suggests, OWA for iPhone is not a full-fledged Outlook client in that it’s limited to the single e-mail account associated with an Office 365 account. It does, however, have most of the functionality you’d expect from a phone mail client. It syncs contacts with the iPhone address book, pushes notifications for appointments and new mail, and generally does everything else that the Windows Phone 8 Outlook and Calendar apps do with a somewhat similar interface.

Microsoft has thrown in a few things to subvert the Apple ecosystem besides building the application in an HTML5 clone of its “Metro” interface. For example, when you set the location for a meeting in OWA’s calendar, you can search for the location with Bing Maps and attach the address and map information to the appointment. Other features of the Outlook and full Outlook Web clients, such as automatic creation of appointments based on the contents of e-mails and access to LinkedIn data on the sender of an e-mail, are also part of the OWA for iPhone client.

The OWA client adds an additional layer of security for people who put the app on their personal phone—a mobile PIN. You can add a four-digit PIN code to the app to protect access to your e-mail, allowing you to pass your phone to your bored child or spouse to play Plants vs. Zombies without worrying about exposing them to the horrors of your work life.

The startup screen for OWA for iPhone.


You must be at least this tall to ride: OWA for iPhone only works with the most up-to-date version of Office 365. If you're still on the pre-Office 2013 edition, you're stuck.

One per customer: OWA for iPhone does not support multiple Outlook accounts. But that's not really a big deal, since OWA is meant to keep your Outlook mail from mingling with the unwashed masses of your personal mail accounts.

There's a brief walk-through with tips on the interface after you first launch the app.

OWA allows you (or requires you, based on security settings) to set up a 4-digit security PIN separate from your iPhone's password.

Once you're logged into the app, it looks a lot like Windows 8—with a flat tile interface and HTML5 animations.

The Inbox looks almost identical to the Windows Phone 8 and Windows 8 "Metro" mail clients.

In messages, OWA can use the same Exchange server-side applications as Outlook 2013, including the built-in LinkedIn tool for pulling up contact data and the "Suggested Meetings" feature.

Tap "Suggested Meetings" and OWA starts building a calendar entry based on the text of the e-mail.

You can then edit the location or search Bing for a matching location to get the actual address.

Pick the location that matches...

...and it's added to the appointment, ready to be added to your calendar and sent to other invitees.

You can set the time for an appointment quickly with Microsoft's answer to the spinning-dial interface—a set of scrolling blocks to set the hour and minute and to choose between AM and PM.

There are three calendar views to choose from—an agenda view...

...and a month calendar view, which drills down into a daily or agenda view when you tap on days with appointments.

While the editing interface for messages is spartan—perhaps too spartan—tapping on the ellipses at the top-right of the screen pulls down some additional options for messages.

The People interface—OWA's address book—is essentially the same as the Outlook Web interface from the browser.

This is where you can choose to embed content from your phone in the message, as well as set and check address fields and importance.

Unfortunately, there's no connection here to SkyDrive or local storage other than photos, so all you can send from the iPhone is images.

Close a message without sending, and you can save as a draft or delete.

Listing image by Sean Gallagher


The only Utah ISP (and one of the few nationwide) standing up for user privacy

Pete Ashdown is the founder and CEO of XMission, based in Utah.

On May 29, 2013, Pete Ashdown received a two-page document from the United States Department of Justice Criminal Division at the United States Embassy in Rome, Italy. Ashdown is the founder and CEO of XMission, an independent ISP and Web host based in Salt Lake City, Utah.

You are hereby requested to preserve, under the provisions of Title 18, United States Code, Section 2703(f), the following records in your custody or control, including records stored on backup media:

A. All stored electronic communications and other files associated with the following IP address: 166.70.270.2

There are two minor problems with this request. First, it’s not a valid IP address. Second, the IP address it’s supposed to be is actually that of XMission’s Tor node (166.70.207.2).

“So not only did they not bother to investigate the fact that it was Tor node, but they didn’t know what a proper IP address was either,” Ashdown told Ars.
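For the curious: each field of a dotted-quad IPv4 address must be an integer from 0 to 255, so the "270" alone gives the request away. The quick check below is purely illustrative; it isn't anything from the DOJ request or XMission's systems.

```python
# Each field of a dotted-quad IPv4 address must be an integer from 0 to 255,
# so 166.70.270.2 cannot exist. Illustrative check only.
import ipaddress

for addr in ("166.70.270.2", "166.70.207.2"):
    try:
        ipaddress.IPv4Address(addr)
        print(addr, "-> valid IPv4 address")
    except ValueError:
        print(addr, "-> not a valid IPv4 address")
```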

Not all legal requests are as malformed as this one. Ashdown's company, XMission, is one of the few ISPs and hosts in the United States that make a point of standing up for the privacy of its 30,000 customers (California's Sonic.net is often noted as one of the others). When Ashdown gets a request to preserve or hand over data, he first checks that it is accurately written. If it is, he tells law enforcement to come back with a probable cause-driven warrant—at which point they never do. In the age of new disclosures about what government agencies are finding out about all of us, such a defiant stance is worth noting.

“My guess is that it's just laziness,” he said. “It's so much easier to fire off a boilerplate subpoena than have to go to a court and defend probable cause and get a court to sign off on it.”

Ashdown said that since 1997 (XMission began in 1993), he’s gotten requests like these, but they’ve been “ramping up” in recent years. These days it's “one to two every quarter”—a cumulative total of “upwards of 100.”

“My view of patriotism is that you question your government,” he added. “You question law enforcement. You question everyone who may be trying to peer into your life. I believe that the 4th Amendment makes that very clear. It should be questioned and it should be balanced against a court asking the same questions. I have a history in my family. My mother was in Denmark when the Nazis overran it in a couple of days. She always had distaste for authority and people telling her what she needed to do. That's not my vision of freedom in the US, that somebody can peer into our communications or save all of our communications for later.”

Ashdown says that 75 percent of the requests are from Utah-based law enforcement, 15 percent come from federal authorities, and 10 percent are out of jurisdiction.

“The actual warrants I’ve received are probably less than half a dozen,” he said. “I've never received a follow-up warrant to subpoenas that were not signed off by a court. I've never received an FBI follow-up (to my knowledge) to an out-of-jurisdiction request as well.”

Ashdown has a history in local politics. Twice he’s run for the United States Senate, and twice he’s been defeated in the primary. But he clearly has strong views about data protection, privacy, and how the United States Constitution should be applied in the digital age.

“It seems pretty obvious to me that the data that is retained within my business is, in effect, my papers and effects and is covered under the 4th Amendment of the US Constitution and Article 1, Section 14 of the Utah Constitution,” he added.

As far as local media can tell, that likely makes XMission the only ISP in the Beehive State that puts the brakes on such investigations. Nearly 99.9 percent of others comply, according to Craig Barlow, chief of children's justice in the Utah attorney general's office (speaking to the Salt Lake Tribune).

Legal experts say that there's little market pressure to compel Ashdown (and his larger competitors) to take such a position. After all, they don't want to be seen as being soft on crime. Furthermore, even if the administrative subpoenas are challenged, companies can lose.

"I know Twitter challenged one in regard to Occupy and eventually lost," Amie Stepanovich, director of the domestic surveillance project at the Electronic Privacy Information Center, told Ars. "The further problem is that there is a really high standard to challenge these orders—institutional bad faith. The Supreme Court has indicated that the countervailing interests at issue are the public's interest in order and the target's interest in absolute privacy. It should be noted that the public's interest in 'order' is referenced but not the public's interest in lively public debate or the ability to communicate anonymously, both First Amendment considerations."

Despite his newly publicized privacy stance, Ashdown hasn’t seen a huge uptick in new customers—since being written up on Russia Today, for example, he said he has only gotten 10 new customers. Nonetheless, XMission does pretty well for itself. Ashdown says that he takes in $7 million per year in revenue, and 80 percent of that is from hosting and colocation.

“It's not worth it to me to sell out to the government—[other ISPs] are all making money off of these taps,” he said. “It's not worth it to me to sell out to the government because that's not the country I live in. That's not what I want to see the Internet used for. I’m not saying that criminals should be free to do whatever they want, but just that those investigations should be specific. If the investigative bodies say that we need to monitor this specific IP address, I will help them with that process. What I will not do is to give them metadata or broad monitoring of all the traffic that is passing over my network because I don't believe that's necessary to protect the safety of Americans. I don't believe it's constitutional either.”

In the end, Ashdown thinks too many American tech companies have complied with such legal requests. His best guess for a reason why? These moves come from the advice of conservative-minded counsel and corporate boards.

“There are lots of small ISPs in this country that are struggling to get by in the face of the behemoths, and the small ISPs are much more likely to protect their privacy because they make the decision,” he said. “It's one person making the decision and a small ISP is not answering to a board of directors or stockholders. It's probably still rare that there's this small ISP run by one person, but I know they're out there. I've heard from them.”

