Here’s an interesting article on earthquake prediction. Basically, reliable prediction has eluded scientists so far and will probably continue to do so.
Since the early 20th century, scientists have known that large quakes often cluster in time and space: 99 percent of them occur along well-mapped boundaries between plates in Earth’s crust and, in geological time, repeat almost like clockwork. But after decades of failed experiments, most seismologists came to believe that forecasting earthquakes in human time, on the scale of dropping the kids off at school or planning a vacation, was about as scientific as astrology. By the early 1990s, prediction research had disappeared as a line item in the budget of the US Geological Survey (USGS). “We got burned enough back in the ’70s and ’80s that nobody wants to be too optimistic about the possibility now,” says Terry Tullis, a career seismologist and chair of the National Earthquake Prediction Evaluation Council (NEPEC), which advises the USGS.
Defying the skeptics, however, a small cadre of researchers has held onto the faith that, with the right detectors and computational tools, it will be possible to predict earthquakes with the same precision and confidence with which we forecast just about any other extreme natural event, including floods, hurricanes, and tornadoes. The USGS may simply have given up too soon. After all, the believers point out, advances in sensor design and data analysis could allow for the detection of subtle precursors that seismologists working a few decades ago might have missed.
And the stakes couldn’t be higher. The three biggest natural disasters in human history, measured in dollars and cents, have all been earthquakes, and there’s a good chance the next one will be too. According to the USGS, a magnitude 7.8 quake along Southern California’s volatile San Andreas fault would result in an estimated 1,800 deaths and a clean-up bill of more than $210 billion, tens of billions of dollars more than the cost of Hurricane Katrina and the Deepwater Horizon oil spill combined.