Category Archives: Peer Reviewed Article Review

agent based modeling

Agent-based modeling is one of those things I’d like to play around with some day, if I ever theoretically have some time.

Modelling domestic water demand: An agent based approach

The urban water system is a complex adaptive system consisting of technical, environmental and social components which interact with each other through time. As such, its investigation requires tools able to model the complete socio-technical system, complementing “infrastructure-centred” approaches. This paper presents a methodology for integrating two modelling tools, a social simulation model and an urban water management tool. An agent based model, the Urban Water Agents’ Behaviour, is developed to simulate the domestic water users’ behaviour in response to water demand management measures and is then coupled to the Urban Water Optioneering Tool to calculate the evolution of domestic water demand by simulating the use of water appliances. The proposed methodology is tested using, as a case study, a major period of drought in Athens, Greece. Results suggest that the coupling of the two models provides new functionality for water demand management scenarios assessment by water regulators and companies.
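Out of curiosity, here is a minimal sketch of the kind of simulation loop the abstract describes: households that cut their water use in response to a demand-management campaign and slowly rebound afterward. To be clear, this is my own toy illustration, not the authors’ Urban Water Agents’ Behaviour model; every class name and parameter below is invented.

```python
import random

class HouseholdAgent:
    """Toy household that adjusts per-capita water use in response to
    a demand-management signal (price hike, awareness campaign, etc.)."""

    def __init__(self, base_use_lpd=150.0):
        self.base = base_use_lpd                      # liters per person per day
        self.use_lpd = base_use_lpd
        self.receptivity = random.uniform(0.0, 0.3)   # how strongly this agent responds

    def step(self, campaign_intensity):
        # Cut use in proportion to receptivity and campaign intensity...
        reduction = self.use_lpd * self.receptivity * campaign_intensity
        self.use_lpd = max(50.0, self.use_lpd - reduction)
        # ...then rebound slowly toward old habits, never above the baseline.
        self.use_lpd = min(self.base, self.use_lpd + 0.5)

def simulate(n_agents=1000, n_days=365):
    agents = [HouseholdAgent() for _ in range(n_agents)]
    daily_demand = []
    for day in range(n_days):
        intensity = 0.1 if day > 100 else 0.0   # campaign starts on day 100
        for a in agents:
            a.step(intensity)
        daily_demand.append(sum(a.use_lpd for a in agents))
    return daily_demand

demand = simulate()
print(f"demand on day 1: {demand[0]:,.0f} L; on day 365: {demand[-1]:,.0f} L")
```

The interesting behavior in a real model comes from interaction between agents (word of mouth, social norms, responses to drought news); this sketch only has the heterogeneity.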

tree canopy volume

I had never thought about modeling tree canopy volume in 3D before. I’ve played around with simple algorithms that place trees on a map, assume a mature canopy area per tree, and estimate the total canopy area. This is useful because cities sometimes set targets and metrics in terms of number of trees, and sometimes in terms of tree canopy. The latter is better because it relates more directly to other goals a city might have around the hydrologic cycle, carbon, heat, air quality, aesthetics and property values, biodiversity and habitat, and the financial cost to public coffers of achieving these goals. Once you have an algorithm relating number of trees to canopy area, you can add more variables like type of tree, growth over time, and some assumed attrition rate or half-life (a rough sketch of this follows below). Come to think of it, I have played around with leaf area index, which is a quasi-3D concept.
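For fun, here is a hedged sketch of that back-of-the-envelope model: trees to canopy area with linear crown growth and exponential attrition. The parameter values (mature crown size, years to maturity, half-life) are placeholders I made up, not numbers from any study.

```python
def canopy_area_m2(n_planted, years, mature_crown_m2=50.0,
                   years_to_mature=20.0, half_life_years=40.0):
    """Estimate total canopy area for a cohort of trees planted at year 0.

    Assumes a linear crown ramp-up to maturity and a constant attrition
    half-life -- both placeholder assumptions, easy to swap out.
    """
    survival = 0.5 ** (years / half_life_years)                  # fraction still alive
    crown = mature_crown_m2 * min(1.0, years / years_to_mature)  # crown of a survivor
    return n_planted * survival * crown

# Example: a cohort of 10,000 street trees, checked every decade
for t in (0, 10, 20, 30, 40):
    print(f"year {t:2d}: {canopy_area_m2(10_000, t) / 10_000:.1f} ha of canopy")
```

Splitting the cohort by tree type (different crown sizes and half-lives) and summing the results gets you most of the way to the planning-level estimates cities need. Anyway, without further ado, here is the article that prompted my line of thought: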

Local Impact of Tree Volume on Nocturnal Urban Heat Island: A Case Study in Amsterdam

The aim of this research is to quantify the local impacts of tree volumes on the nocturnal urban heat island intensity (UHI). Volume of each individual tree is estimated through a 3D tree model dataset derived from LIDAR data and modelled with geospatial technology. Air temperature is measured on 103 different locations of the city on a relatively warm summer night. We tested an empirical model, using multi-linear regression analysis, to explain the contribution of tree volume to UHI while also taking into account urbanization degree and sky view factor at each location. We also explored the scale effect by testing variant radii for the aggregated tree volume to uncover the highest impact on UHI. The results of this study indicate that, in our case study area, tree volume has the highest impact on UHI within 40 meters and that a one degree temperature reduction is predicted for an increase of 60,000 m3 tree canopy volume in this 40 meter buffer. In addition, we present how geospatial technology is used in automating data extraction procedures to enable scalability (data availability for large extents) for efficient analysis of the UHI relation with urban elements.
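The headline number, about one degree of cooling per 60,000 m3 of canopy volume within a 40 meter buffer, comes out of a multi-linear regression. Here is a sketch of that setup on synthetic data, just to make the method concrete; the predictor names and coefficients below are mine, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 103  # measurement locations, matching the study

# Synthetic predictors (all invented for illustration):
tree_volume = rng.uniform(0, 120_000, n)   # canopy volume in a 40 m buffer, m^3
urbanization = rng.uniform(0, 1, n)        # degree of urbanization, 0-1
sky_view = rng.uniform(0.2, 1.0, n)        # sky view factor, 0-1

# Synthetic nocturnal UHI built to echo the paper's headline result:
# about -1 deg C per 60,000 m^3 of nearby canopy volume, plus noise.
uhi = (2.0 + 3.0 * urbanization - 1.5 * sky_view
       - tree_volume / 60_000 + rng.normal(0, 0.3, n))

# Ordinary least squares: UHI ~ 1 + volume + urbanization + sky view
X = np.column_stack([np.ones(n), tree_volume, urbanization, sky_view])
coef, *_ = np.linalg.lstsq(X, uhi, rcond=None)
print(f"estimated cooling per 60,000 m^3 of canopy: {coef[1] * 60_000:+.2f} deg C")
```

The scale question the authors explore, which buffer radius explains UHI best, would amount to recomputing `tree_volume` for several radii and comparing the regression fits.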

science communication

Here’s an article from Hydrology and Earth System Sciences on better communication between scientists and the lay audience. An excerpt:

More theoretical knowledge on science communication can provide useful background knowledge for geoscience communicators, and thereby positively contribute to the geoscience debate in society. With this latter goal in mind, in this paper, we provide geoscientists with a review of relevant general science communication theory. We focus on television appearances of (geo)scientists, though much of the research/themes/literature discussed also holds for popular-scientific presentations or for interactions with newspaper journalists. We use the term television loosely, also applying it to programs that appear online.

In this review we discuss six major themes in science communication research related to television performances: scientist motivation (Sect. 2), target audience (Sect. 3), jargon and information transfer (Sect. 4), narratives and storytelling (Sect. 5), relationship between scientists and journalists (Sect. 6), and stereotypes of scientists (Sect. 7). For each theme we make the results from the literature tangible by analyzing a television appearance of a geoscientist from a science communication perspective. For these case studies we use examples for which we had background information on behind-the-scenes discussions and negotiations, namely television appearances of authors Hut and Stoof.

connectivity and corridors

From Conservation Biology:

Connecting science, policy, and implementation for landscape-scale habitat connectivity

In an increasingly fragmented world, networks of habitat corridors are critical to support movement of organisms between habitat patches and the long-term persistence of species. The science of corridor design and the policy of corridor establishment are developing rapidly, but often independently. Here we assess the links between the science and policy of habitat corridors, to better understand how corridors can be effectively implemented, with a focus on a suite of landscape-scale connectivity plans in tropical and sub-tropical Asia. Our synthesis suggests that the process of corridor designation may be more efficient if the scientific determination of optimal corridor locations and arrangement is synchronized in time with the achievement of political buy-in and policy direction for corridor designation. Land tenure and the intactness of existing habitat in the region are also critical factors – optimal connectivity strategies may be very different if there are few, versus many, political jurisdictions (including commercial and traditional land tenures) and intact versus degraded habitat between patches. We identify financing mechanisms for corridors, and also several important gaps in our understanding of effective corridor design including how corridors, particularly those managed by local communities, can be protected from habitat degradation and unsustainable hunting. Finally, we point to a critical need for quantitative, data-driven models that can prioritize potential corridors or multi-corridor networks based on their relative contributions to long-term metacommunity persistence.
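That closing call for quantitative corridor-prioritization models got me thinking about what a minimal version might look like: rank candidate corridors by their marginal gain in some connectivity index. Here is a toy sketch using an area-weighted index loosely inspired by the probability-of-connectivity family of metrics; the patches, corridors, and index are all stand-ins of my own, not anything from the paper.

```python
import networkx as nx

def connectivity_index(G):
    """Probability that two random points of habitat fall in connected
    patches -- a crude stand-in for metacommunity persistence."""
    total = sum(nx.get_node_attributes(G, "area").values())
    return sum((sum(G.nodes[p]["area"] for p in comp) / total) ** 2
               for comp in nx.connected_components(G))

def gain(G, corridor):
    """Marginal improvement in the index from adding one corridor."""
    H = G.copy()
    H.add_edge(*corridor)
    return connectivity_index(H) - connectivity_index(G)

# Habitat patches (areas in km^2), one existing and three candidate corridors
patches = {"A": 100, "B": 80, "C": 40, "D": 10}
G = nx.Graph()
G.add_nodes_from((p, {"area": a}) for p, a in patches.items())
G.add_edge("A", "B")

candidates = [("B", "C"), ("C", "D"), ("A", "D")]
for corridor in sorted(candidates, key=lambda c: gain(G, c), reverse=True):
    print(corridor, f"gain = {gain(G, corridor):.3f}")
```

A real prioritization would also have to weigh corridor cost, land tenure, and habitat quality along the route, which is exactly where the paper says the science and the policy need to meet.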

not time to worry about Trump

I’ve been asking whether Trump is using a fascist playbook and whether we should be worried, but here are a couple of articles that say no, it is not time to worry. Both point out that the share of media coverage a candidate gets does not have much to do with the proportion of voters who actually support them. Alternet notes that Bernie Sanders is supported by more likely voters than Donald Trump, even though Trump gets 23 times more media attention:

The Tyndall Report, which tracks coverage on nightly network newscasts, found that Trump has hogged more than a quarter of all presidential race coverage — and more than the entire Democratic field combined.

Hillary Clinton — who enjoys the most voter support, by far, of any candidate in either party — had received the second-most network news coverage.

Sanders, who is supported by more voters than Trump, has received just 10 minutes of network airtime throughout the entire campaign — which translates to 1/23 of Trump’s campaign coverage.

Nate Silver tries to break it down some more. One interesting analysis he has done shows that, at this point in the last two elections, only 8–16% of all the primary-related Google searches that would eventually be made had already been made. So let’s hope that means only the crazies on the fringe are paying attention at this point.

more on Robert Paxton

Recently I was musing about the U.S., Donald Trump, and fascism. I suggested that the U.S. has a fairly rigid social order favoring and enforced by traditional political, bureaucratic, business and professional elites. We also have a grassroots movement based on rhetoric of national, religious, and to some extent racial unity, and fearful of outsiders. Here is Robert Paxton’s definition of fascism in his 1998 paper The Five Stages of Fascism:

Fascism is a system of political authority and social order intended to reinforce the unity, energy, and purity of communities in which liberal democracy stands accused of producing division and decline.

So fascism is not just a social order cynically maintained by and for the interests of traditional elites. It is somewhat the opposite – a grassroots movement based on a myth of national, religious or racial purity and unity. The grassroots believe the rhetoric while the elites probably do not, but both are interested in maintaining the social order. The danger arises when the traditional elites cynically choose to join forces with the grassroots fascists, because they do not feel strong enough to maintain the existing social order on their own. Together, the two groups are strong enough to come to power where neither could on its own, but once in power the traditional elites may lose control, particularly under war or crisis conditions.

So ironically, it is at a moment when liberal elements in society are making some progress against the entrenched elites that we may be most vulnerable to a right-wing grassroots movement arising. The Tea Party’s anti-immigrant and anti-Muslim rhetoric, and Donald Trump’s attempts to use that rhetoric to rise to power, would seem to fit the bill. It is ironic that a modern American fascism would use rhetoric of freedom and democracy to undermine freedom and democracy, but that is our national unifying myth so it makes some sense.

cyberattacks and superflares

Need some new things to worry about? Look no further!

1. A catastrophic cyberattack on the U.S. electric infrastructure

In this New York Times bestselling investigation, Ted Koppel reveals that a major cyberattack on America’s power grid is not only possible but likely, that it would be devastating, and that the United States is shockingly unprepared.
 
Imagine a blackout lasting not days, but weeks or months. Tens of millions of people over several states are affected. For those without access to a generator, there is no running water, no sewage, no refrigeration or light. Food and medical supplies are dwindling. Devices we rely on have gone dark. Banks no longer function, looting is widespread, and law and order are being tested as never before.

It isn’t just a scenario. A well-designed attack on just one of the nation’s three electric power grids could cripple much of our infrastructure—and in the age of cyberwarfare, a laptop has become the only necessary weapon. Several nations hostile to the United States could launch such an assault at any time. In fact, as a former chief scientist of the NSA reveals, China and Russia have already penetrated the grid. And a cybersecurity advisor to President Obama believes that independent actors—from “hacktivists” to terrorists—have the capability as well. “It’s not a question of if,” says Centcom Commander General Lloyd Austin, “it’s a question of when.”

2. In case people are not enough to worry about, the Sun could turn on us.

Astrophysicists have discovered a stellar “superflare” on a star observed by NASA’s Kepler space telescope with wave patterns similar to those that have been observed in the Sun’s solar flares. (Superflares are flares that are thousands of times more powerful than those ever recorded on the Sun, and are frequently observed on some stars.)

The scientists found the evidence in the star KIC9655129 in the Milky Way. They suggest there are similarities between the superflare on KIC9655129 and the Sun’s solar flares, so the underlying physics of the flares might be the same…

Typical solar flares can have energies equivalent to 100 million megaton bombs, but a superflare on our Sun could release energy equivalent to a billion megaton bombs.

Low-cost solution to the grid reliability problem

I have heard from know-it-alls that the problem with renewable energy is that it is intermittent and hard to store. I have always thought there are many ways to deal with that: charge a battery, pump water uphill, heat something, wind a spring, compress air, or electrolyze water into hydrogen to run back through a fuel cell. Those are my thoughts with absolutely no expertise at all, but luckily the experts are thinking about this too:

Mark Z. Jacobson, Mark A. Delucchi, Mary A. Cameron, and Bethany A. Frew. A low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes. PNAS 2015; DOI: 10.1073/pnas.1510028112, 2015

This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050–2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide.
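To see why “intermittent and hard to store” is not a knockout argument, here is a toy dispatch loop: oversupply charges a store, shortfall discharges it, and you count whatever load goes unserved. This is my own illustration of the basic idea, far cruder than the paper’s portfolio of heat, cold, and electricity storage plus demand response; all the numbers are made up.

```python
import random

random.seed(1)
HOURS = 24 * 7
load = [100.0] * HOURS                                   # flat demand, MW
wind = [random.uniform(0, 220) for _ in range(HOURS)]    # intermittent supply, MW

store, capacity, efficiency = 0.0, 1_000.0, 0.8          # generic store, MWh
unmet = 0.0

for h in range(HOURS):
    surplus = wind[h] - load[h]
    if surplus >= 0:
        store = min(capacity, store + surplus * efficiency)  # charge, with losses
    else:
        draw = min(store, -surplus)                          # discharge to cover deficit
        store -= draw
        unmet += (-surplus) - draw                           # demand we failed to serve

served = sum(load) - unmet
print(f"load served: {100 * served / sum(load):.1f}% "
      f"(unserved: {unmet:.0f} of {sum(load):.0f} MWh)")
```

The paper’s contribution is showing that a carefully sized mix of stores and demand response can drive that unserved fraction to zero at a social cost well below fossil fuels.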

Peter Checkland

Peter Checkland is another systems thinker I have just discovered. Apparently he is well known, but I find that systems thinkers are buried in a variety of disciplines, in this case management, and I wasn’t looking there.

This is from a 2000 journal article, Soft Systems Methodology: A Thirty Year Retrospective:

Although the history of thought reveals a number of holistic thinkers — Aristotle, Marx, Husserl among them — it was only in the 1950s that any version of holistic thinking became institutionalized. The kind of holistic thinking which then came to the fore, and was the concern of a newly created organization, was that which makes explicit use of the concept of ‘system’, and today it is ‘systems thinking’ in its various forms which would be taken to be the very paradigm of thinking holistically. In 1954, as recounted in Chapter 3 of Systems Thinking, Systems Practice, only one kind of systems thinking was on the table: the development of a mathematically expressed general theory of systems. It was supposed that this would provide a meta-level language and theory in which the problems of many different disciplines could be expressed and solved; and it was hoped that doing this would help to promote the unity of science.

These were the aspirations of the pioneers, but looking back from 1999 we can see that the project has not succeeded. The literature contains very little of the kind of outcomes anticipated by the founders of the Society for General Systems Research; and scholars in the many subject areas to which a holistic approach is relevant have been understandably reluctant to see their pet subject as simply one more example of some broader ‘general system’!

But the fact that general systems theory (GST) has failed in its application does not mean that systems thinking itself has failed. It has in fact flourished in several different ways which were not anticipated in 1954. There has been development of systems ideas as such, development of the use of systems ideas in particular subject areas, and combinations of the two. The development in the 1970s by Maturana and Varela (1980) of the concept of a system whose elements generate the system itself provided a way of capturing the essence of an autonomous living system without resorting to use of an observer’s notions of ‘purpose’, ‘goal’, ‘information processing’ or ‘function’. (This contrasts with the theory in Miller’s Living Systems (1978), which provides a general model of a living entity expressed in the language of an observer, so that what makes the entity autonomous is not central to the theory.) This provides a good example of the further development of systems ideas as such. The rethinking, by Chorley and Kennedy (1971), of physical geography as the study of the dynamics of systems of four kinds, is an example of the use of systems thinking to illuminate a particular subject area.

It’s sad to me to see his contention that general systems theory has failed. It should be a central, foundational body of knowledge that people are trained in before they apply their focus to narrower fields. As I have said many times, this would give a wider variety of intelligent people a shared body of knowledge, a shared vocabulary, and respect for each other’s pursuits, and it might accelerate the pace of innovation.

Watson vs. Shalmaneser

A class at Georgia Tech did an experiment where artificial intelligence (“Watson”) was used to “enhance human creativity”. It sounds like a cool class:

Following research on computational creativity in our Design & Intelligence Laboratory (http://dilab.gatech.edu), most readings and discussions in the class focused on six themes: (1) Design Thinking is thinking about ill-structured, open-ended problems with ill-defined goals and evaluation criteria; (2) Analogical Thinking is thinking about novel situations in terms of similar, familiar situations; (3) Meta-Thinking is thinking about one’s own knowledge and thinking; (4) Abductive Thinking is thinking about potential explanations for a set of data; (5) Visual Thinking is thinking about images and in images; and (6) Systems Thinking is thinking about complex phenomena consisting of multiple interacting components and causal processes. Further, following the research in the Design & Intelligence Laboratory, the two major creative domains of discussion in the class were (i) Engineering design and invention, and (ii) Scientific modeling and discovery. The class website provides details about the course (http://www.cc.gatech.edu/classes/AY2015/cs8803_spring).

Here’s how they actually went about using the computer:

The general design process followed by the 6 design teams for using Watson to support biologically inspired design may be decomposed into two phases: an initial learning phase and a latter open-ended research phase. The initial learning phase proceeded roughly as follows. (1) The 6 teams selected a case study of biologically inspired design of their choice from a digital library called DSL (Goel et al. 2015). For each team, the selected case study became the use case. (2) The teams started seeding Watson with articles selected from a collection of around 200 biology articles derived from Biologue. Biologue is an interactive system for retrieving biology articles relevant to a design query (Vattam & Goel 2013). (3) The teams generated about 600 questions relevant to their use cases. (4) The teams identified the best answers in their 200 biology articles for the 600 questions. (5) The teams trained Watson on the 600 question-answer pairs. (6) The 6 teams evaluated Watson for answering design questions related to their respective use cases.
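The question-answering core of a pipeline like this can be sketched with plain TF-IDF retrieval: index the article passages, then return the best-matching passage for each design question. Watson’s actual internals are far more sophisticated and not described here, so treat this as a stand-in of my own, with made-up passages:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-ins for passages drawn from the ~200 biology articles
passages = [
    "The lotus leaf sheds water because of micro-scale surface bumps.",
    "Termite mounds maintain stable temperatures through passive ventilation.",
    "Gecko feet adhere to smooth walls using van der Waals forces.",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(passages)   # one TF-IDF row per passage

def answer(question):
    """Return the passage most similar to the design question."""
    q = vectorizer.transform([question])
    return passages[cosine_similarity(q, index)[0].argmax()]

print(answer("What makes water roll off a leaf?"))
```

The 600 question-answer pairs the teams built would serve as training and evaluation data: you can score a retriever like this by how often its top passage matches the labeled best answer.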

The value of the computer seems to be in helping the humans sort through and screen an enormous amount of literature in a short time, literature that could otherwise take years to get through. This could theoretically accelerate progress by allowing us to make connections that otherwise could not be made. There are going to be some brilliant ideas out there that are stuck in a dead end, where they never got to the people who can use them. And there are going to be many more brilliant ideas that emerge only when older ideas are connected.

These students seem to have restricted themselves to a research database in one field (biology). But I think it could be very valuable to cross disciplinary boundaries and look for analogous ideas – let’s say, in thermodynamics, ecology, and economics. Or sociology and animal behavior. These are boundaries that have been crossed by just a few visionary people, but are often ignored by everyone else. If making connections were more of a standard practice, many more brilliant ideas would escape the information cul-de-sacs.

This reminded me of the novel Stand on Zanzibar, where “synthesist” is a job. The world is not doing so well, and governments are seeking out unconventional thinkers to synthesize knowledge across multiple fields and come up with solutions to new problems. There is also an artificial intelligence in the book, as I recall, but I don’t remember it being involved in the synthesis. I don’t have a copy of the book, and this particular piece of human knowledge and creativity is walled off from me by “intellectual property” law, so I can’t benefit from it or connect it to anything else right now.