Category Archives: Peer Reviewed Article Review

reduced work week as a carbon emissions strategy

This study suggests that reducing the work week to four days could cut carbon emissions.

Worktime Reduction as a Solution to Climate Change: Five Scenarios Compared for the UK

Reducing working hours in an economy has been discussed as a policy which may have benefits in achieving particular economic, social and environmental goals. This study proposes five different scenarios to reduce the working hours of full-time employees by 20% with the aim of cutting greenhouse gas emissions: a three-day weekend, a free Wednesday, reduced daily hours, increased holiday entitlement and a scenario in which the time reduction is efficiently managed by companies to minimise their office space. We conceptually analyse the effects of each scenario on time use patterns through both business and worker activities, and how these might affect energy consumption in the economy. To assess which of the scenarios may be most effective in reducing carbon emissions, this analytical framework is applied as a case study for the United Kingdom. The results suggest that three of the five scenarios offer similar benefits, and are preferable to the other two, with a difference between the best and worst scenarios of 13.03 MTCO2e. The study concludes that there is a clear preference for switching to a four-day working week over other possible work-reduction policies.
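To make the framework a little more concrete, here is a minimal sketch of the kind of scenario accounting the abstract describes: each work-reduction scenario shifts time between commuting, workplace energy use, and household or leisure activity, and the net carbon effect is the sum of those components. Every number below is invented purely for illustration; these are not the paper's figures, and the component breakdown is my own simplification.

```python
# Hypothetical annual emission changes (MtCO2e) per component, per scenario.
# All values are made up for illustration; they are not the paper's results.
scenarios = {
    "three-day weekend":         {"commuting": -3.0, "workplace": -6.0, "household_rebound": +1.5},
    "free Wednesday":            {"commuting": -3.0, "workplace": -5.5, "household_rebound": +1.5},
    "shorter working days":      {"commuting":  0.0, "workplace": -2.0, "household_rebound": +1.0},
    "more holiday entitlement":  {"commuting": -2.5, "workplace": -1.5, "household_rebound": +2.0},
    "managed office downsizing": {"commuting": -3.0, "workplace": -7.0, "household_rebound": +1.5},
}

def net_change(components: dict) -> float:
    """Net annual emission change (MtCO2e) for one scenario."""
    return sum(components.values())

# Rank scenarios from largest emissions cut to smallest.
ranked = sorted(scenarios.items(), key=lambda kv: net_change(kv[1]))
for name, comps in ranked:
    print(f"{name:28s} {net_change(comps):+.1f} MtCO2e/yr")

best, worst = ranked[0], ranked[-1]
print(f"spread between best and worst: {net_change(worst[1]) - net_change(best[1]):.1f} MtCO2e/yr")
```

The point of the structure, as in the paper, is that the ranking comes from how each scenario reallocates time, not from the total hours reduced, which is the same 20% in every case.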

reservoirs, resilience and system dynamics

This article in Water Resources Research uses a system dynamics simulation to examine the resilience of a reservoir. Some of these concepts may be adaptable to other types of water resources systems or systems in general.

Comparison of static and dynamic resilience for a multipurpose reservoir operation

Reliability, resilience and vulnerability are the traditional risk measures used to assess the performance of a reservoir system. Among these measures, resilience is used to assess the ability of a reservoir system to recover from a failure event. However, the time independent static resilience does not consider the system characteristics, interaction of various individual components and does not provide much insight into reservoir performance from the beginning of the failure event until the full performance recovery. Knowledge of dynamic reservoir behavior under the disturbance offers opportunities for proactive and/or reactive adaptive response that can be selected to maximize reservoir resilience. A novel measure is required to provide insight into the dynamics of reservoir performance based on the reservoir system characteristics and its adaptive capacity. The reservoir system characteristics include, among others, reservoir storage curve, reservoir inflow, reservoir outflow capacity and reservoir operating rules. The reservoir adaptive capacity can be expressed using various impacts of reservoir performance under the disturbance (like reservoir release for meeting a particular demand, socio-economic consequences of reservoir performance, or resulting environmental state of the river upstream and downstream from the reservoir). Another way of expressing reservoir adaptive capacity to a disturbing event may include aggregated measures like reservoir robustness, redundancy, resourcefulness and rapidity. A novel measure that combines reservoir performance and its adaptive capacity is proposed in this paper and named ‘dynamic resilience’. The paper also proposes a generic simulation methodology for quantifying reservoir resilience as a function of time. The proposed resilience measure is applied to a single multi-purpose reservoir operation and tested for a set of failure scenarios. The dynamic behavior of reservoir resilience is captured using the system dynamics simulation approach, a feedback-based object-oriented method, very effective for modelling complex systems. The results of dynamic resilience are compared with the traditional performance measures in order to identify advantages of the proposed measure. The results confirm that the dynamic resilience is a powerful tool for selecting proactive and reactive adaptive response of a multipurpose reservoir to a disturbing event that cannot be achieved using traditional measures. The generic quantification approach proposed in the paper allows for easy use of dynamic resilience for planning and operations of various civil infrastructure systems.
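To give a feel for the system dynamics idea, here is a toy stock-and-flow reservoir simulation. The inflow series, demand, capacity, operating rule, and drought disturbance are all invented, and the "dynamic resilience" computed at the end is just a rolling performance ratio standing in for the paper's measure, not its actual formulation.

```python
import numpy as np

days = 365
rng = np.random.default_rng(42)
inflow = np.clip(rng.normal(50.0, 20.0, days), 0.0, None)  # hypothetical daily inflow volumes
inflow[120:180] *= 0.2          # disturbance: a 60-day drought
demand = 45.0                   # constant downstream demand, hypothetical
capacity = 3000.0               # storage capacity, hypothetical
storage = capacity * 0.6        # initial storage

supplied = np.zeros(days)
for t in range(days):
    storage += inflow[t]
    release = min(demand, storage)               # simple operating rule: meet demand if water is available
    storage = min(storage - release, capacity)   # spill anything above capacity
    supplied[t] = release

performance = supplied / demand                  # 1.0 = full supply, < 1.0 = failure
# A stand-in "dynamic resilience": rolling mean of performance, showing how the
# system degrades during the drought and how quickly it recovers afterwards.
window = 30
dynamic_resilience = np.convolve(performance, np.ones(window) / window, mode="same")

print(f"worst 30-day performance: {dynamic_resilience.min():.2f}")
print(f"days below full supply:   {(performance < 1.0).sum()}")
```

Even in this crude form, the time-varying curve shows something the single static resilience number cannot: how deep the failure goes and how long the recovery takes, which is exactly the information the paper argues operators need.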

“virgin soil epidemics” in the Americas

This is a seminal 1976 paper by Alfred Crosby on the epidemics that devastated Native Americans after Europeans first came. I’m sure there is plenty of scholarly work since then that may have refined this, but it is horrifying even if some of the details have changed. The most extreme estimates are that as many as 100 million people lived in the Americas pre-Columbus, or one-sixth of all humans alive at the time, and only a few million survived. If true, this is much worse than the Black Death in Europe. This would mean that Native American civilizations might have been equivalent in size and sophistication to European and Asian ones. We just don’t know.

I think this is also a cautionary tale for what a novel disease, or a combination of novel diseases, could do to our current civilization, whether natural or man-made. He does point out, though, that genetic susceptibility and lack of prior exposure were only some of the factors. People at the time did not understand quarantine, for example, and some practices for dealing with the dead led to more contagion. People might have been weakened by exotic diseases like smallpox, then finished off by diseases they did have experience with, like malaria or pneumonia. They didn't understand how hydration, nutrition, and keeping warm could keep their strength up to fight off secondary infections, or else people may have been too sick to fetch water and food and keep fires burning. Hopefully we can do much better today if and when some terrible epidemic strikes.

radiation during your flight to Mars

Radiation exposure could be a problem on flights to Mars, according to Nature.

Cosmic radiation exposure and persistent cognitive dysfunction

The Mars mission will result in an inevitable exposure to cosmic radiation that has been shown to cause cognitive impairments in rodent models, and possibly in astronauts engaged in deep space travel. Of particular concern is the potential for cosmic radiation exposure to compromise critical decision making during normal operations or under emergency conditions in deep space. Rodents exposed to cosmic radiation exhibit persistent hippocampal and cortical based performance decrements using six independent behavioral tasks administered between separate cohorts 12 and 24 weeks after irradiation. Radiation-induced impairments in spatial, episodic and recognition memory were temporally coincident with deficits in executive function and reduced rates of fear extinction and elevated anxiety. Irradiation caused significant reductions in dendritic complexity, spine density and altered spine morphology along medial prefrontal cortical neurons known to mediate neurotransmission interrogated by our behavioral tasks. Cosmic radiation also disrupted synaptic integrity and increased neuroinflammation that persisted more than 6 months after exposure. Behavioral deficits for individual animals correlated significantly with reduced spine density and increased synaptic puncta, providing quantitative measures of risk for developing cognitive impairment. Our data provide additional evidence that deep space travel poses a real and unique threat to the integrity of neural circuits in the brain.

the latest from James Hansen

James Hansen and an enormous number of co-authors have a new paper in Earth System Dynamics.

Young People’s Burden: Requirement of Negative CO2 Emissions

The rapid rise of global temperature that began about 1975 continues at a mean rate of about 0.18 °C/decade, with the current annual temperature exceeding +1.25 °C relative to 1880–1920. Global temperature has just reached a level similar to the mean level in the prior interglacial (Eemian) period, when sea level was several meters higher than today, and, if it long remains at this level, slow amplifying feedbacks will lead to greater climate change and consequences. The growth rate of climate forcing due to human-caused greenhouse gases (GHGs) increased over 20 % in the past decade mainly due to resurging growth of atmospheric CH4, thus making it increasingly difficult to achieve targets such as limiting global warming to 1.5 °C or reducing atmospheric CO2 below 350 ppm. Such targets now require “negative emissions”, i.e., extraction of CO2 from the atmosphere. If rapid phasedown of fossil fuel emissions begins soon, most of the necessary CO2 extraction can take place via improved agricultural and forestry practices, including reforestation and steps to improve soil fertility and increase its carbon content. In this case, the magnitude and duration of global temperature excursion above the natural range of the current interglacial (Holocene) could be limited and irreversible climate impacts could be minimized. In contrast, continued high fossil fuel emissions by the current generation would place a burden on young people to undertake massive technological CO2 extraction, if they are to limit climate change. Proposed methods of extraction such as bioenergy with carbon capture and storage (BECCS) or air capture of CO2 imply minimal estimated costs of 104–570 trillion dollars this century, with large risks and uncertain feasibility. Continued high fossil fuel emissions unarguably sentences young people to either a massive, possibly implausible cleanup or growing deleterious climate impacts or both, scenarios that should provide both incentive and obligation for governments to alter energy policies without further delay.
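Just as back-of-the-envelope context, here is some quick arithmetic using only the figures quoted in the abstract (the 0.18 °C/decade trend, the +1.25 °C current anomaly, and the 104–570 trillion dollar extraction cost range). The extrapolation is mine, not the paper's analysis, and it assumes the linear trend simply continues.

```python
rate_per_decade = 0.18      # degC/decade, from the abstract
current_anomaly = 1.25      # degC above 1880-1920, from the abstract
target = 1.5                # degC warming target discussed in the abstract

years_to_target = (target - current_anomaly) / (rate_per_decade / 10.0)
print(f"At {rate_per_decade} degC/decade, +{target} degC is roughly "
      f"{years_to_target:.0f} years away if the trend simply continues.")

# The quoted cost range for technological CO2 extraction this century,
# naively spread over the remaining ~80 years of the century.
low, high = 104e12, 570e12  # dollars, from the abstract
print(f"That works out to roughly ${low/80/1e12:.1f}-{high/80/1e12:.1f} trillion per year.")
```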

wireless ECG

This paper from MIT describes a technology that can read emotions accurately by detecting heartbeats simply by bouncing a wireless signal off a person. It is supposedly as accurate as an electrocardiogram. Reading emotions this way is pretty amazing, but to me, just the idea of reading a heartbeat accurately without contact sounds like a pretty big deal in a medical setting. It also could have obvious implications in psychology, and quite possibly disturbing uses in security, intelligence, military and business settings. Imagine something like Google Glass giving you information on the health and emotions of a person you are talking to.

Emotion Recognition using Wireless Signals

This paper demonstrates a new technology that can infer a person’s emotions from RF signals reflected off his body. EQ-Radio transmits an RF signal and analyzes its reflections off a person’s body to recognize his emotional state (happy, sad, etc.). The key enabler underlying EQ-Radio is a new algorithm for extracting the individual heartbeats from the wireless signal at an accuracy comparable to on-body ECG monitors. The resulting beats are then used to compute emotion-dependent features which feed a machine-learning emotion classifier. We describe the design and implementation of EQ-Radio, and demonstrate through a user study that its emotion recognition accuracy is on par with state-of-the-art emotion recognition systems that require a person to be hooked to an ECG monitor.
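Here is a rough sketch of the back half of the pipeline the abstract describes: once individual heartbeats have been extracted from the wireless signal, the inter-beat intervals yield heart-rate-variability style features that feed an emotion classifier. The beat-extraction step itself, which is the paper's main contribution, is not reproduced here; the inter-beat intervals below are simulated, and the two "emotional states" are just different variability levels I made up.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def hrv_features(ibi: np.ndarray) -> np.ndarray:
    """Simple heart-rate-variability features from inter-beat intervals (seconds)."""
    return np.array([
        ibi.mean(),                    # mean beat-to-beat interval
        ibi.std(),                     # overall variability
        np.abs(np.diff(ibi)).mean(),   # short-term (beat-to-beat) variability
    ])

# Fake "recordings": two toy emotional states with different variability.
rng = np.random.default_rng(0)
samples, labels = [], []
for label, spread in [(0, 0.02), (1, 0.08)]:        # 50 recordings per state
    for _ in range(50):
        ibi = rng.normal(0.8, spread, size=60)       # ~75 bpm with state-dependent variability
        samples.append(hrv_features(ibi))
        labels.append(label)

X, y = np.array(samples), np.array(labels)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_train, y_train)
print("toy classification accuracy:", clf.score(X_test, y_test))
```

Obviously the hard part is getting clean beats out of a reflected radio signal in the first place; once you have them, the classification machinery is fairly standard, which is roughly what the abstract claims.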

tree type and heat mitigation

Here is an article on how the specific type of street tree affects the urban heat island locally, focusing on plant area index.

Microclimate benefits that different street tree species provide to sidewalk pedestrians relate to differences in Plant Area Index

The way a street tree is able to modify the local microclimate on pedestrian walkways may vary according to tree species according to key canopy and leaf characteristics, such as leaf angle, leaf size, canopy architecture or simply canopy density. Three similar north-south orientated streets, with three different tree species possessing different canopy and leaf characteristics were studied in summer 2014. Microclimatic parameters were measured on pedestrian walkways below and away from tree canopies between 06:00 and 20:00 on three cloudless days. Physiological Equivalent Temperature (PET) was estimated to indicate pedestrian thermal comfort. Microclimate conditions were measured below and away from trees at solar noon for a wide range of trees with different Plant Area Index (PAI) as determined using full-frame photography. In streets with Ulmus procera and Platanus x acerifolia trees, the microclimatic benefits were significantly greater than the street with Eucalyptus scoparia trees, however no significant differences in the estimated PET. Microclimate benefit increased with increasing PAI for all three tree species, however no significant difference in under-canopy microclimate amongst tree species when the PAI was similar. It appears that differences in PAI are paramount in determining the microclimatic and PET benefits. Obviously, certain tree species have a limit of the PAI they can achieve, and that should be considered when selecting or comparing tree species for shading and cooling benefits. This study assists urban planners and landscape professionals in selecting street tree species for cooling benefits based on the expected or managed tree canopy area.

I’d heard of Leaf Area Index before I read this abstract, but not Plant Area Index. A search for Plant Area Index on Google brings up a Wikipedia definition of Leaf Area Index as the top hit.

Leaf area index (LAI) is a dimensionless quantity that characterizes plant canopies. It is defined as the one-sided green leaf area per unit ground surface area (LAI = leaf area / ground area, m²/m²) in broadleaf canopies.

The best explanation of the difference I could find on the internet is here:

Leaf (or needles in the case of conifers) should be seen here as a generic term designating the above-ground areal extent of vegetation. If no distinction is made between leaves (needles) and the other elements, the proper term to use is PAI: Plant Area Index, rather than LAI.

So I guess the plant area index accounts for the trunk, branches, stems, etc.
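A tiny worked example makes the distinction clear. The numbers below are made up: LAI counts only one-sided green leaf area per unit ground area, while PAI counts all above-ground plant elements (leaves plus trunk, branches, and stems) that an upward-looking canopy photograph would register.

```python
leaf_area = 180.0    # m2 of one-sided leaf area, hypothetical
woody_area = 25.0    # m2 of projected trunk/branch/stem area, hypothetical
ground_area = 60.0   # m2 of ground under the canopy, hypothetical

lai = leaf_area / ground_area                  # leaves only
pai = (leaf_area + woody_area) / ground_area   # leaves plus woody elements

print(f"LAI = {lai:.1f}")
print(f"PAI = {pai:.2f}")
```

So for a tree in full leaf the two numbers are close, and PAI is always at least as large as LAI; in winter, a deciduous tree's LAI drops toward zero while its PAI stays positive because the woody structure remains.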

macroinvertebrates (aka worms and bugs) in rain gardens

Even though the names imply they are living ecosystems, stormwater management engineers still have a tendency to think of rain gardens and bioretention basins as inert systems. It’s good to see the profession working with other disciplines and taking soil science more seriously these days. And where most are focused on physical, chemical, and plant-based processes, a few are looking more closely at the importance of animal activity.

Soil invertebrates in Australian rain gardens and their potential roles in storage and processing of nitrogen

Research on rain gardens generally focuses on hydrology, geochemistry, and vegetation. The role of soil invertebrates has largely been overlooked, despite their well-known impacts on soil nutrient storage, removal, and processing. Surveys of three rain gardens in Melbourne, Australia, revealed a soil invertebrate community structure that differed significantly among sites but was stable across sampling dates (July 2013 and April 2014). Megadrilacea (earthworms), Enchytraeidae (potworms), and Collembola (springtails) were abundant in all sites, and together accounted for a median of 80% of total soil invertebrate abundance. Earthworms were positively correlated to soil organic matter content, but the abundances of other taxonomic groups were not strongly related to organic matter content, plant cover, or root biomass across sites. While less than 5% of total soil N was estimated to be stored in the body tissues of these three taxa, and estimated N gas emissions from earthworms (N2O and N2) were low, ingestion and processing of soil was high (e.g., up to 417% of the upper 5 cm of soil ingested by earthworms annually in one site), suggesting that the contribution of these organisms to N cycling in rain gardens may be substantial. Thus, invertebrate communities represent an overlooked feature of rain garden design that can play an important role in the structure and function of these systems.

the latest on fusion power

The dream of fusion power is not dead. In fact, the science is apparently pretty straightforward, but the technology of containing the plasma safely is not. Past attempts have focused on containing the plasma inside a doughnut-shaped “tokamak,” but there are some new ideas on that.

Fusion nuclear science facilities and pilot plants based on the spherical tokamak

A fusion nuclear science facility (FNSF) could play an important role in the development of fusion energy by providing the nuclear environment needed to develop fusion materials and components. The spherical torus/tokamak (ST) is a leading candidate for an FNSF due to its potentially high neutron wall loading and modular configuration. A key consideration for the choice of FNSF configuration is the range of achievable missions as a function of device size. Possible missions include: providing high neutron wall loading and fluence, demonstrating tritium self-sufficiency, and demonstrating electrical self-sufficiency. All of these missions must also be compatible with a viable divertor, first-wall, and blanket solution. ST-FNSF configurations have been developed simultaneously incorporating for the first time: (1) a blanket system capable of tritium breeding ratio TBR ≈ 1, (2) a poloidal field coil set supporting high elongation and triangularity for a range of internal inductance and normalized beta values consistent with NSTX/NSTX-U previous/planned operation, (3) a long-legged divertor analogous to the MAST-U divertor which substantially reduces projected peak divertor heat-flux and has all outboard poloidal field coils outside the vacuum chamber and superconducting to reduce power consumption, and (4) a vertical maintenance scheme in which blanket structures and the centerstack can be removed independently. Progress in these ST-FNSF missions versus configuration studies including dependence on plasma major radius R0 for a range 1 m–2.2 m are described. In particular, it is found the threshold major radius for TBR = 1 is R0 ≥ 1.7 m, and a smaller R0 = 1 m ST device has TBR ≈ 0.9 which is below unity but substantially reduces T consumption relative to not breeding. Calculations of neutral beam heating and current drive for non-inductive ramp-up and sustainment are described. An A = 2, R0 = 3 m device incorporating high-temperature superconductor toroidal field coil magnets capable of high neutron fluence and both tritium and electrical self-sufficiency is also presented following systematic aspect ratio studies.
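The tritium breeding ratio (TBR) numbers are worth a moment of arithmetic. TBR is the ratio of tritium bred in the blanket to tritium burned in the plasma, so the fraction of burned tritium that has to come from an external supply is roughly (1 − TBR). The annual burn figure below is a placeholder of mine, not a number from the paper.

```python
annual_burn_kg = 1.0   # kg of tritium burned per year, hypothetical placeholder

for tbr in (0.0, 0.9, 1.0):
    external = max(1.0 - tbr, 0.0) * annual_burn_kg
    print(f"TBR = {tbr:.1f}: external tritium needed = {external:.2f} kg/yr")
```

So even a device that falls short of self-sufficiency at TBR ≈ 0.9 needs only about a tenth of the external tritium that a non-breeding device of the same burn rate would, which is the abstract's point about the smaller R0 = 1 m option.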

more on human footprint

This 2002 study attempted to map the human footprint on the earth at a fine scale.

The Human Footprint and the Last of the Wild

There is little debate in scientific circles about the importance of human influence on ecosystems. According to scientists’ reports, we appropriate over 40% of the net primary productivity (the green material) produced on Earth each year (Vitousek et al. 1986, Rojstaczer et al. 2001). We consume 35% of the productivity of the oceanic shelf (Pauly and Christensen 1995), and we use 60% of freshwater run-off (Postel et al. 1996). The unprecedented escalation in both human population and consumption in the 20th century has resulted in environmental crises never before encountered in the history of humankind and the world (McNeill 2000). E. O. Wilson (2002) claims it would now take four Earths to meet the consumption demands of the current human population, if every human consumed at the level of the average US inhabitant. The influence of human beings on the planet has become so pervasive that it is hard to find adults in any country who have not seen the environment around them reduced in natural values during their lifetimes—woodlots converted to agriculture, agricultural lands converted to suburban development, suburban development converted to urban areas. The cumulative effect of these many local changes is the global phenomenon of human influence on nature, a new geological epoch some call the “anthropocene” (Steffen and Tyson 2001). Human influence is arguably the most important factor affecting life of all kinds in today’s world (Lande 1998, Terborgh 1999, Pimm 2001, UNEP 2001).

Yet despite the broad consensus among biologists about the importance of human influence on nature, this phenomenon and its implications are not fully appreciated by the larger human community, which does not recognize them in its economic systems (Hall et al. 2001) or in most of its political decisions (Soulé and Terborgh 1999, Chapin et al. 2000). In part, this lack of appreciation may be due to scientists’ propensity to express themselves in terms like “appropriation of net primary productivity” or “exponential population growth,” abstractions that require some training to understand. It may be due to historical assumptions about and habits inherited from times when human beings, as a group, had dramatically less influence on the biosphere. Now the individual decisions of 6 billion people add up to a global phenomenon in a way unique to our time. What we need is a way to understand this influence that is global in extent and yet easy to grasp—what we need is a map.

Until recently, designing such a map was not possible, because detailed data on human activities at the global scale were unavailable. The fortunate confluence of several factors during the 1990s changed this situation. Rapid advances in earth observation, using satellite technology pioneered by NASA and other space agencies, meant that, for the first time, verifiable global maps of land use and land cover were available (Loveland et al. 2000). The thawing of the cold war and calls for efficiency in government meant that other sources of global geographic data, for example, on roads and railways, were released to the public by the US National Imagery and Mapping Agency (NIMA 1997). Improved reporting of population statistics at subnational levels enabled geographers to create global digital maps of human population density (CIESIN et al. 2000). Finally, advances in geographic information systems (GIS) have provided the integration technology necessary to combine these data in an efficient and reproducible manner. Although the datasets now available are imperfect instruments, they are of sufficient detail and completeness that scientists can map the influence of humans on the entire land’s surface.
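To show the flavor of the overlay the authors describe, here is a minimal sketch: several global data layers (population density, land transformation, access from roads, and so on) are each scored, summed into a single "human influence" grid, and rescaled 0–100. The arrays below are random placeholders standing in for real rasters, and the scoring is illustrative only; the actual Human Footprint uses its own specific scoring rules for each layer.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (180, 360)  # a coarse 1-degree global grid, for illustration only

population_density = rng.lognormal(mean=1.0, sigma=1.5, size=shape)   # people/km2, fake data
land_use_score = rng.integers(0, 10, size=shape).astype(float)        # 0 = wild, 10 = built-up, fake data
distance_to_road_km = rng.exponential(scale=20.0, size=shape)         # km to nearest road, fake data

# Put each layer on a comparable 0-10 influence scale.
pop_score = np.clip(np.log10(population_density + 1.0) * 3.0, 0.0, 10.0)
road_score = np.clip(10.0 - distance_to_road_km / 2.0, 0.0, 10.0)     # closer to roads = more influence

influence = pop_score + land_use_score + road_score
human_footprint = 100.0 * (influence - influence.min()) / (influence.max() - influence.min())

# "Last of the wild": the cells with the lowest footprint scores.
wild_threshold = np.percentile(human_footprint, 10)
print(f"mean footprint score: {human_footprint.mean():.1f}")
print(f"share of cells in the wildest 10%: {(human_footprint <= wild_threshold).mean():.0%}")
```

The real work, of course, is in assembling and validating the underlying global datasets; once those exist, combining them into a single easy-to-grasp map is exactly the kind of straightforward GIS overlay the passage describes.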