Richard B. Rood, Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor
Paul N. Edwards, School of Information, University of Michigan, Ann Arbor
Background
In 2006 at the University of Michigan, a group of students working on projects needed information about the future of Earth’s climate to inform their solution strategies. Knowing of numerous efforts to provide free and easy access to climate data from both observations and models, the professor (Rood) referred the students to the Coupled Model Intercomparison Project phase 3 (CMIP3) archive and the Earth System Grid.
A year later, not one of the student groups had managed to obtain usable CMIP3 data. Instead, all of the students had used the MAGICC / SCENGEN data (Model for the Assessment of Greenhouse-gas Induced Climate Change / A Regional Climate SCEnario GENerator), which takes a simpler, reduced-complexity approach compared with the CMIP3 models. In informal interviews, these students cited technical barriers such as unknown data formats. They also cited value barriers: the MAGICC / SCENGEN model was known, trusted, and easily available. Working with CMIP3 data would have required much more effort without any certainty of reward. This inability to use data from state-of-the-art models has broad implications: educational, scientific and programmatic.
Now, eight years later, we have just taught an experimental University of Michigan course on Climate Change Informatics. Our class included 18 master’s-level students in informatics (Edwards, 12 students) and climate science (Rood, six students). The course centered on projects for two clients: a large water management agency in Florida, and researchers at the Centers for Disease Control and Prevention interested in the effects of heat on human health. Both clients hoped to use climate data – historical observations as well as modeled projections of the future – to evaluate issues such as rainfall patterns and heat-wave probabilities at approximately a county scale. Both already had considerable experience with climate data and sought better ways to evaluate and choose among available datasets.
In the early weeks of the course, we explained the importance of the CMIP5 models (phase 5 of CMIP) and how the students could retrieve these data from the Earth System Grid Federation (ESGF). We also exposed them to a variety of other Web sources for climate data. As an initial exercise, we asked the students to open accounts, download some data files, and work with them. They logged their experiences. The climate science students were generally able to obtain and manipulate data; most of the informatics students, however, failed altogether.
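A sense of what even this exercise demands is instructive. The minimal Python sketch below shows the first move after a successful ESGF download: opening a CMIP5 file and extracting a local time series. The file name is hypothetical (though it follows the CMIP5 naming convention), and nearly every line embeds tacit knowledge – that “tas” is the CMIP name for near-surface air temperature, that the units are Kelvin, and that longitudes run from 0 to 360 degrees.

```python
# A minimal sketch of a first step after an ESGF download; the file
# name is hypothetical but follows the CMIP5 naming convention.
import xarray as xr

ds = xr.open_dataset("tas_Amon_SomeModel_rcp45_r1i1p1_200601-210012.nc")
print(ds.data_vars)  # CF-convention names such as "tas", not plain English

tas = ds["tas"]  # near-surface air temperature, in Kelvin
# CMIP longitudes run 0-360, so Detroit's 83 degrees W becomes 277 E.
detroit = tas.sel(lat=42.3, lon=360.0 - 83.3, method="nearest")
print(detroit.mean().values - 273.15)  # long-term mean, degrees Celsius
```

None of this is difficult for a climate scientist; none of it is discoverable by a newcomer from the data portal alone.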
All of the students reported encountering frustrating roadblocks to this seemingly simple task. For example, one atmospheric science student reported that “none of the [ESGF nodes] had working links in the ‘Information’ section. On occasion I was redirected to a Github Wiki; however, I had to navigate to a different page, which redirected me to yet another page, in order to access the information I was originally searching for on the ESGF node page.” Of note, both of our course’s client organizations — who already had considerable experience working with climate information — also reported great difficulty in making use of CMIP5 data without direct assistance from climate scientists.
In this article, we argue for a fundamental refocusing of cyberinfrastructure to support not only climate science per se, but also the broader use of climate information. We start by recognizing that human contributions – direct communication with experts – are often required to make climate information usable. We posit that through examination of real-world examples, we can identify patterns that treat data acquisition, analysis, and application as an integrated workflow rather than as a collection of unrelated tools. We maintain that open, community-based approaches are essential to the development of standards and shared services. Our ultimate goal is to accelerate the use of state-of-the-art climate information in management and planning.
Data access is not the main problem
In a 2011 request for proposals, NASA stated that “scientists and engineers spend more than 60% of their time just preparing the data for model input and data-model intercomparison.” Based on our experience with the use of climate observations and model projections, both inside and outside the community of scientists, we assert that the inefficiency suggested by this quotation drastically understates the challenges of using climate data.
When networked climate data access was first conceived in the 1990s, the performance and the diversity of information technology were primitive by today’s standards. As these systems were deployed, they naturally focused on those directly invested (funded) in successful data use. For example, NASA pioneered the Earth Observing System Data Information System (EOSDIS) in the 1990s. Initially, EOSDIS had ambitions to set standards and provide seamless, rapid, open access to NASA satellite data [6]. Many lessons were learned from this experience. Above all, top-down, centralized design, implementation, and maintenance proved a poor strategy for building community standards for data processing, access, and use. There is simply more volatility and complexity in data, technology, and user needs than can be accommodated by a command-and-control approach.
This experience aligns with a tension frequently observed in infrastructure studies: system designers tend to seek comprehensive capabilities (which they control), while users seek specific functionality – and will, whenever possible, build their own ad hoc gateways, kludging together multiple systems to achieve their aims (Edwards et al., 2007; Edwards, 2010). It is unlikely that any centralized approach can provide services of comparable suitability.
Since the 1990s, the scope of climate data applications has expanded tremendously, and climate information systems are struggling to keep pace. Examples of the numerous efforts to improve access to climate data include the Global Change Master Directory, the Earth System Grid, the USGS Geo Data Portal, the reanalysis datasets at the Physical Sciences Division of NOAA’s Earth System Research Laboratory, and, most recently, the Climate.Data.Gov website announced in March 2014. The National Climatic Data Center provides large collections of historical observations. Organizations such as RealClimate.org provide lists of selected data sources. Recently, non-governmental services such as Climate Wizard and Microsoft FetchClimate have emerged to enable access to and visualization of climate change information. All of these data portals and catalogs carry the implication that the website owners judge the listed datasets to be of significant value – an implicit signal of trustworthiness.
These tools and capabilities have improved the ability to access and use climate data; they have provided useful pieces or building blocks. Yet, as is easily discovered, few if any climate data portals currently present their resources in a form that is both well documented and readily understandable to users who need to analyze and synthesize climate data in problem solving. Overpeck et al. (2011) nicely summarize the need:
[Today] …a much larger community of diverse users clamors to access, understand, and use climate data. These include an ever-increasing range of scientists (ecologists, hydrologists, social scientists, etc.) and decision-makers in society who have real money, livelihoods, and even lives at stake (resource managers, farmers, public health officials, and others). Key users also include those with public responsibilities, as well as their constituents in the general public who must support and understand decisions being made on their behalf. As a result, climate scientists must not only share data among themselves, but they must also meet a growing obligation to facilitate access to data for those outside their community and, in doing so, respond to this broader user community to ensure that the data are as useful as possible.
For these potential users, retrieving climate datasets is only the first step in a work process. Later steps, such as evaluating and tailoring the information for the task at hand, require considerable background knowledge and expert judgment. For example, many users’ instincts are to seek one “best” simulation run for future climate – an approach few climate scientists would endorse. Thus access per se is not the greatest challenge faced by scientists and practitioners (Barsugli et al., 2013).
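Standard practice, for example, treats the model archive as an ensemble whose across-model spread is part of the answer. The minimal sketch below illustrates that practice; the file names are hypothetical, and we assume the runs have already been regridded to a common grid, which is itself a nontrivial expert step.

```python
# A minimal sketch of multi-model ensemble statistics; file names are
# hypothetical, and we assume the runs were already regridded to a
# common grid (itself a nontrivial expert step).
import glob
import xarray as xr

paths = sorted(glob.glob("tas_Amon_*_rcp45_r1i1p1_200601-210012.nc"))
runs = [xr.open_dataset(p)["tas"] for p in paths]

# Stack the models along a new "model" dimension.
ensemble = xr.concat(runs, dim="model")

# The multi-model mean and the across-model spread together say far
# more about future climate than any single "best" run.
ens_mean = ensemble.mean(dim="model")
ens_spread = ensemble.std(dim="model")
```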
Case study: applying climate data to human health concerns
A concrete case study demonstrates the challenges of bringing together the data and knowledge needed to address a specific problem. Our research question was: How can we improve our ability to protect people from dangerous heat waves, both now and in the future? The focus was on Detroit, Michigan. The research team included epidemiologists, public health officers, statisticians, remote sensing scientists and climate scientists, and it spanned graduate students, established researchers and professionals from outside climate science. Results from this research are described in White-Newsome et al. (2009), Zhang et al. (2011), and Oswald et al. (2012).
The heat-wave project required temperature data as a primary variable. A number of ancillary environmental data types were also potentially useful, such as moisture, wind and cloudiness. Land-surface data were needed, for example, urban, rural and water, along with measures of the built environment such as surface permeability. Data on the geographic distribution of people were needed, as well as a way to determine how vulnerable they are to heat. Demographic data on children, the elderly, and the infirm could help, as well as information about building types and the availability of air conditioning. Interview-based data were required to understand how people might learn of a heat health warning, and how they might respond. Finally, health data that link heat to morbidity and mortality were needed. Here, we focus on the initial steps associated only with the environmental data.
The potential types of environmental data were assets of several federal agencies. In many problems involving climate change and adaptation, the National Climatic Data Center (NCDC) is the principal source of surface meteorological observations from stations. Surface-station observations are known and trusted by local planners and managers, and they are integral to answering the question “what has happened in the past?” Satellite data are often posed as a complement to surface-station observations, but their relevance needs to be evaluated, particularly in applications at ground level. Focusing on temperature, we considered data from the Moderate Resolution Imaging Spectroradiometer (MODIS), the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Modern-Era Retrospective Analysis for Research and Applications (MERRA) in our problem.
To build on the pioneering research investigating MODIS land-surface temperature in urban heat studies (e.g., Jin et al., 2005), an essential first step was to compare weather-station data in Detroit with MODIS land-surface temperature. A Google search on “MODIS surface temperature” yielded promising data sources at the Institute for Computational Earth System Science and the Land Processes Distributed Active Archive Center. There, the user guide[xix] offered the following as its second paragraph, immediately after a brief introduction.
In the early days, the MODIS LST products MOD11_L2, MOD11A1, and MOD11B1 had been validated at stage 1 with in situ measurements in more than 50 clear-sky cases in the temperature range of 263-331K and the column water vapor range of 0.4-4cm, most of them presented in published papers… Detailed validation of the C6 MODIS LST product is given in the most recently published paper (Wan, 2014). Please use the C6 MODIS LST product in your applications because its quality is much better than the qualities of the C4.1 LST product (generated by the V4 algorithm from C5 input data) and C5 products (generated by the V5 algorithm from C5 input data).
As documentation, this paragraph was usable, at most, by experts in the MODIS community.
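To make concrete what that documentation presumes, consider the dataset-specific knowledge needed just to turn one MOD11A1 granule into temperatures. In the minimal Python sketch below, the file name is hypothetical, and the particulars – the zero fill value, the 0.02 scale factor, the Kelvin units, and the meaning of the quality-control bits – reflect our reading of the product documentation: exactly the kind of detail an outsider cannot be expected to reconstruct.

```python
# A minimal sketch of decoding MOD11A1 land-surface temperature; the
# file name is hypothetical, and the fill value, scale factor, and
# quality-bit layout reflect our reading of the product documentation.
import numpy as np
from pyhdf.SD import SD, SDC

granule = SD("MOD11A1.A2006200.h12v04.005.hdf", SDC.READ)

raw = granule.select("LST_Day_1km").get().astype(float)
qc = granule.select("QC_Day").get()

raw[raw == 0] = np.nan       # 0 is the fill value, not 0 K
lst_c = raw * 0.02 - 273.15  # scale factor 0.02; Kelvin to Celsius

# Bits 0-1 of the quality field hold the mandatory QA flag;
# 00 indicates "LST produced, good quality".
lst_c[(qc & 0b11) != 0] = np.nan
```

Comparing the result against a Detroit weather station would then require reprojecting the sinusoidal MODIS grid and matching satellite overpass times to observation times – each step another piece of expert knowledge.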
As a logical next step, our research team asked the obvious question: whom do we know? We sought guidance from scientists, often friends, who are part of the MODIS project. This tactic amounted to a private workaround to, or leveraging of, the data center’s public online services; had we not had such contacts, the barriers to access and understanding would simply have been too high, and we would have abandoned further exploration. In our case, consultation with these experts determined that the potential benefit for our study was small relative to the work required to acquire, understand, manipulate and evaluate the MODIS dataset. Since the MODIS data were not essential for our research, we – like the students in our opening example – left this dataset behind, because the necessary chain of infrastructure services and data systems was incomplete. The result: not only were we unable to engage a potentially valuable dataset, but any contributions our study might have made to the evaluation of the MODIS data product also remain unrealized.
Therefore, we focused on the weather-station data from NCDC to provide a standard. Even here, determining the most appropriate weather-station datasets required difficult and time-consuming evaluation; we faced issues of data homogeneity, representativeness and quality assessment. These challenges ranged from deletion or correction of errors identified by previous researchers, to changes in the quality control procedures applied to the same base datasets, to changed protocols for formatting and metadata. Our evaluation relied on published literature – and, again, on ad hoc communication with more science and data experts across the country and in the data center. The process we followed, requiring many months, had much in common with other investigations that have examined the same datasets; there was extensive rediscovery of knowledge already held elsewhere. The need to evaluate climate data for suitability in a given application remains, and it intrinsically inhibits the ability to build seamless, automated data systems.
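The flavor of that evaluation can be suggested with a minimal sketch. The two checks below, applied to hypothetical daily station records, illustrate steps of the kind we performed; the thresholds are illustrative assumptions of ours, not NCDC procedure.

```python
# A minimal sketch of two station-data screening steps; the thresholds
# are illustrative assumptions, not NCDC procedure.
import numpy as np
import pandas as pd

def qc_screen(station: pd.Series, neighbors: pd.DataFrame) -> pd.Series:
    """Return a copy of a daily temperature series with suspect days masked."""
    cleaned = station.copy()

    # Range check: discard physically implausible daily values (deg C).
    cleaned[(cleaned < -50.0) | (cleaned > 55.0)] = np.nan

    # Spatial consistency: flag days departing from the median of
    # nearby stations by more than an illustrative 10 deg C.
    neighborhood = neighbors.median(axis=1)
    cleaned[(cleaned - neighborhood).abs() > 10.0] = np.nan
    return cleaned
```

Even sketches like this beg the questions that consumed months of our work: which neighbors are representative, and whose corrections have already been applied to the archive.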
Climate informatics: metadata and the need for human experts
Our experiential description of barriers to using climate data is corroborated by reports from the National Research Council (2009) and by reports from programs such as the National Science Foundation’s EarthCube. More effective climate data services require a specific focus on the processes of data acquisition and use; in particular, they need a stronger focus on the interface between human and information technology systems (Edwards et al., 2007; Cummings et al., 2008). The development of disconnected new tools within the traditional constructs of agency programs is not likely to address these problems of integration and synthesis.
Beyond access, what barriers do different classes of users face? The non-climate-scientist user seeking state-of-the-art information is lost when faced with unfamiliar formats and arcane documentation. Meanwhile, even the established climate scientist immediately has a set of sophisticated questions about data quality, drawn from experiences in previous applications. Regardless of position, those who require climate data typically enter into a slow process of acquiring and understanding datasets based on ad hoc communication with peers, established scientists, data experts in their departments, and experts at various data centers. More than finding accessible data archives, then, we are faced with a problem of finding people who hold the knowledge associated with particular datasets and classes of data.
As we saw in the foregoing case study, in problem after problem with students, practitioners and discipline experts, we find that successful users must identify and consult science and data experts in laboratories and data centers across the country. These experts facilitate finding the right dataset(s), help navigate idiosyncratic access methods and provide guidance on the strengths and weaknesses of the various datasets. Only then can users acquire, analyze, and understand the nuances of those datasets. In other words, in most cases non-climate-scientist users devote quite substantial effort to rediscovering knowledge about datasets before they can make use of them.
This can be viewed as a problem of metadata. Notwithstanding the “build it and they will come” rhetoric concerning open data (e.g., Nielsen, 2011), existing metadata are in fact rarely sufficient for users outside the scientific communities within which data were originally produced. Broad focus on metadata products – i.e., written or otherwise recorded documentation – has tended to obscure the equally, if not more, important metadata processes of direct, often informal communication with experts on particular datasets and information systems (Edwards et al., 2011). This pattern is not unique to climate science. It arises in most if not all cutting-edge sciences, where intense focus on the next research project typically comes at the expense of turning data or models into productive commodity resources for users lacking direct training in the field.
Climate data systems should explicitly and prominently incorporate a number of ways for users to find and contact data experts directly. Such a suggestion will doubtless be met with concern. Climate data experts already have plenty to do; are we asking them to become tech support for random citizens? No. Rather, we are talking about the need to design information technology that works more like human-to-human communication, thereby incorporating foundational information that data experts and scientists bring to the problem.
Training, example problems, glossaries, instructions – these types of documentation are essential connective tissue in usable end-to-end data systems. Such documentation might take a hint from current commercial software trends. For example, video walkthroughs of routine procedures such as data access and faceted search are simple to make and, for many, easier to understand than text instructions. Similarly, technical terms require definitions, even for experts. The lack of definitions is a notable barrier to usability, especially when users must leave a page to find them. Strategies to improve glossaries, for example popup definitions accessible by mouseover, are needed. Help forums, now commonplace, pool users’ own expertise and also provide a platform for data managers to answer recurring questions.
Direct communication with busy data experts should be reserved for more complex needs, but it should be readily available, courteous, and competent. Currently, using climate data services might be compared to a novice buying a set of wood chisels with blades of many shapes and sizes, but receiving no training in how to use them.
Design of climate information systems: some missing lessons
Scientists, information technologists, scientific program managers, and federal agencies have put considerable effort into access and availability, yet the barriers to usability of climate data remain dauntingly high. Even the most sophisticated existing services do not provide end-to-end information systems or workflows whose design and function span the fundamental range of effective informatics: storage, search, access, processing, and interface design. Most importantly, most climate data services lack the multi-leveled communication of meaning and function required to serve diverse data users, whose conceptual frameworks and native “languages” differ from those of professional climate researchers. We call this missing communication “translational information,” noting that just as in the case of natural language, human interpreters (for example, trained climate data experts) still provide far more useful, efficient, and accurate assistance than any static documentation, dictionary, or automated translation.
In addition, most existing climate data services suffer from three classic problems in information systems development. First, when system designers are also expert users, they are blind to how such systems appear to novices (Landauer, 1995). This results in an ingrained belief that since the designer finds the system simple and obvious, incompetence must be to blame for any problems other users encounter; solutions are sought in “configuring the user” rather than redesign. Second, developers can become trapped in updating legacy systems that “work” (more or less) for their principal clients, rather than completely rethinking system design for less central, occasional users with whom the developers are mostly unfamiliar. Third, few if any climate data services have applied the user-centered design methods now normative in the field of human-computer interaction (HCI), such as task analysis, interaction design, scenario-based development, storyboarding, and ethnographic interviews (Rogers et al., 2011; Rosson and Carroll, 2009; Diaper and Stanton, 2003; Rosson and Carroll, 2002).
Instead, the approach is to build a portal or a piece of software first, based mainly on the designers’ mental model of user needs (Norman, 1988). That mental model is usually based on those who make the most use of climate data, i.e., climate scientists. The portal or software is then thrown over the wall for beta testing and eventual adjustment. Actual users (especially infrequent, non-expert ones) are rarely canvassed, nor are their capabilities assessed. Conversely, the user community’s own, often-sophisticated tools are unknown to the designer. Most designers do not test or evaluate a wide range of prototypes across a variety of real-use cases outside the climate science community. Thus existing systems are either implicitly designed for the climate expert or, to the contrary, for a too broadly conceived end-user community ranging from scientists to the general public. The result is a plethora of useful information that is not, in fact, usable (Lemos and Rood, 2010).
Note that these issues are fundamentally about communication, scoping and design methodologies — not software engineering or technology. They can be addressed using some or all of the following strategies.
• Ethnographic interviews (participant observation)
• Collecting and studying use cases from a variety of areas
• Task and scenario analysis
• User-centered, iterative interface design
• Formal usability evaluation, both during and after prototyping
Since the primary issues of usable data systems are related to communication, it is a design error to imagine that one computational interface can span all communication needs and styles. Attempting to span too large a range of uses and users yields interfaces and capabilities that serve no community well. A possible way forward might be to build profiles of discipline-specific users, for example, the watershed management community. The resulting profiles play a role in building use cases, which describe background knowledge, capabilities, the software commonly employed and the kinds of problems encountered for which climate data might be of assistance. Such profiles and use cases could provide designers with a much more detailed and explicit model of the users for whom they are designing.
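As one hypothetical illustration of what such a profile might record, consider the Python sketch below; the fields are illustrative choices of ours, not an established schema.

```python
# A hypothetical sketch of a discipline-specific user profile; the
# fields are illustrative choices, not an established schema.
from dataclasses import dataclass

@dataclass
class UserProfile:
    community: str                   # e.g., "watershed management"
    background_knowledge: list[str]  # concepts the user already holds
    capabilities: list[str]          # skills such as GIS or scripting
    software: list[str]              # tools the community already uses
    typical_problems: list[str]      # tasks where climate data could help

watershed = UserProfile(
    community="watershed management",
    background_knowledge=["hydrology", "local land use"],
    capabilities=["GIS analysis", "basic statistics"],
    software=["ArcGIS", "spreadsheets"],
    typical_problems=["flood-frequency planning under a changing climate"],
)
```

Use cases built against explicit profiles of this kind give designers a concrete model of the user, in place of the designer’s own mental model.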
Conclusions
Barriers due to fragmentation of tools and services are not confined to climate science. Such barriers are a natural outcome of how we currently organize science and information technology development. They are exacerbated by a growing demand for climate data and knowledge by those outside of the climate-science discipline. In order to develop more effective data systems and services, it is necessary to pay attention to the end-to-end system. This focus is fundamental, yet it remains dauntingly complex. Without a focus on the end-to-end system, there is little evidence that we will realize the potential of the climate assets that have been built.
Lemos and Morehouse (2005) suggest that more research is needed into the use of climate data and information. This and other scholarship notes the importance of an end-to-end system that supports the provision not only of data, but also of narratives that support and interpret those data. Such narratives describe how, for whom, and in what situations data can be useful (Borgman, forthcoming 2014). This end-to-end system imparts values to climate information, such as trustworthiness, and it stands to improve the robustness of the scientific process and the use of science-based knowledge in planning and management.
The case study and the discussion of functionality in the sections above serve as a template for bringing focus to the information and knowledge system as a whole, i.e., the knowledge infrastructure (Edwards et al., 2013). Such analysis, together with more complete breakdowns of the work involved, reveals successes, bottlenecks, and gaps in the acquisition of knowledge about datasets. Studying these work breakdowns may reveal common workflow structures or sequences.
There is substantial evidence that the challenges of data usability cannot be met by any single organization’s effort to build a “seamless” system. The complexity is simply too great. We document here a number of important points:
1. Human experts are an integral part of the information system. Rather than design the human out of the information system, effort should be focused on collecting the needed human expertise and improving the efficiency of the human expert.
2. An important part of the climate information system is the need to evaluate the suitability of data and knowledge for a particular application. Therefore, information system design needs to facilitate the evaluation step. The unmet need for evaluation stands as a barrier to delivering the most appropriate and readily usable data for particular purposes.
3. Information systems need to be designed with more attention and focus on classes of users with similar needs. Since the challenges of usability are more about communication than technology, data systems need to focus on effective communication.
4. If climate data are to be made a commodity, far more attention needs to be placed on the connective tissue of usable end-to-end data systems, such as training, example problems, glossaries, walkthroughs, and interface design. Resources to build, maintain and evolve data systems are required.
The complexity of the climate informatics challenges requires community-based approaches. We know that the knowledge necessary for problem solving exists within communities of experts and practitioners, because problems are in fact being solved. In order to allow successful strategies to emerge and organize, it is essential to use the standards and shared services that evolve in communities. In well-governed communities, requirements are exposed and deliberated. Priority should, therefore, be on integrated systems with standard interfaces, services, and shared tools. Widely shared use cases, profiles of user communities (needs, capabilities, knowledge), and a commitment to user-centered design and continual usability evaluation can help promote sharing across community boundaries.
Improvement of data usability requires study of information flow and use: informatics. We see a growing need for climate informatics as a formal field of practice, education, and profession. In general, boundary-spanning expertise that joins domain science knowledge to state-of-the-art understanding of information systems design remains an essential but neglected element of effective cyberinfrastructure. Climate informatics professionals need to take their place in a community to accelerate emergent and transformational services with attention to the end-to-end system.
Author Bios
Richard Rood is a professor in the Department of Atmospheric, Oceanic and Space Sciences and in the School of Natural Resources and the Environment at the University of Michigan. He teaches a cross-discipline graduate course on climate change, which addresses critical analysis and complex problem-solving. Prior to moving to Michigan in 2005, Rood was a researcher and manager of both scientific and computational organizations at NASA. He writes the climate change blog for Wunderground.com.
Paul N. Edwards is professor in the School of Information and the Department of History at the University of Michigan. He is the author of “A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming” (MIT Press, 2010), a history of the weather and climate information infrastructure, and co-editor of “Changing the Atmosphere: Expert Knowledge and Environmental Governance” (MIT Press, 2001), as well as other books and numerous articles. Before joining the University of Michigan, he taught at Stanford University and Cornell University. Edwards has held visiting positions at Sciences Po, Paris; Technische Universiteit Eindhoven, Netherlands; the University of Kwazulu-Natal, South Africa; and the University of Melbourne, Australia. He has been a Carnegie Scholar and a Guggenheim Fellow, and he was a co-PI of the Earth System Commodity Governance project.
References
Barsugli, J. J., Guentchev, G., Horton, R., Wood, A., Mearns, L. O., Liang, X. Z., Winkler, J., Dixon, K., Hayhoe, K., Rood, R. B., Goddard, L., Ray, A., Buja, L., and Ammann, C., The Practitioner’s Dilemma: How to Assess the Credibility of Downscaled Climate Projections, EOS, Trans. Amer. Geophys. Union, 94, 424-425, DOI: 10.1002/2013EO460005, 2013.
Borgman, C. L. Big Data, Little Data, No Data: Scholarship in the Networked World, MIT Press, forthcoming 2014.
Cummings, J., Finholt, T., Foster, I., Kesselman, C., and Lawrence, K. A., Beyond Being There: A Blueprint for Advancing the Design, Development, and Evaluation of Virtual Organizations (NSF Grants 0751539 and 0816932, Office of Cyberinfrastructure), 58 pp., 2008. http://www.ci.uchicago.edu/events/VirtOrg2008/VO_report.pdf
Diaper, D. and Stanton, N., The Handbook of Task Analysis for Human-Computer Interaction, CRC Press, 568 pp., 2003.
Edwards, P.N., A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming, MIT Press, 552 pp., 2010.
Edwards, P. N., Jackson, S. J., Bowker, G. C., Knobel, C. P., Understanding Infrastructure: Dynamics, Tensions, and Design, Report of a Workshop on “History and Theory of Infrastructure: Lessons for New Scientific Cyberinfrastructures,” Ann Arbor: Deep Blue, 50 pp., 2007. http://hdl.handle.net/2027.42/49353
Edwards, P. N., Mayernik, M. S., Batcheller, A. L., Bowker, G. C., and Borgman, C. L., Science Friction: Data, Metadata, and Collaboration, Social Studies of Science, 41:5, 667-690, 2011.
Edwards, P. N., Jackson, S. J., Chalmers, M., Bowker, G. C., Borgman, C. L., Ribes, D., Burton, M., and Calvert, S., Knowledge Infrastructures: Intellectual Frameworks and Research Challenges, Report of a workshop sponsored by the National Science Foundation and the Sloan Foundation, Ann Arbor: Deep Blue, 40 pp., 2013. http://hdl.handle.net/2027.42/97552
Jin, M., Dickinson, R. E., and Zhang, D.-L., The footprint of urban areas on global climate as characterized by MODIS, J. Clim., 18, 1551-1565, 2005.
Landauer, T. K., The Trouble With Computers: Usefulness, Usability, and Productivity, MIT Press, 440 pp., 1995.
Lemos, M. C., and Morehouse, B. J., The co-production of science and policy in integrated climate assessments, Global Environ Change, 15, 57-68, 2005
Lemos, M. C. and Rood, R. B., Climate Projections and their Impact on Policy and Practice, Wiley Interdisciplinary Reviews: Climate Change, 1, 670-682, DOI: 10.1002/wcc.71, 2010.
Oswald, E. M., Rood, R. B., Zhang, K., Gronland, C. J., O’Neill, M. S., White-Newsome, J. L., Brines, S. J., and Brown, D. G., An investigation into the spatial variability of near-surface air temperatures in the Detroit, MI metropolitan region, J. Appl. Meteorol. Clim., 51, 1290-1304, doi:10.1175/JAMC-D-11-0127.1, 2012.
Overpeck, J. T., Meehl, G. A., Bony, S., and Easterling, D. R., Climate Data Challenges in the 21st Century, Science, 331, 700-702, DOI: 10.1126/science.1197869, 2011.
National Research Council (NRC), Restructuring Federal Climate Research to Meet the Challenges of Climate Change, ISBN: 0-309-13174-X, 178 pp., 2009.
Nielsen, M. Reinventing Discovery: The New Era of Networked Science, Princeton University Press, 272 pp., 2011.
Norman, D. A., The Design of Everyday Things, Basic Books, 288 pp., 1988.
Rogers, Y., Sharp, H., and Preece, J., Interaction Design: Beyond Human-Computer Interaction, John Wiley & Sons, 602 pp., 2011.
Rosson, M. B., and Carroll, J. M., Scenario-Based Design, in A. Sears and J. A. Jacko, eds., Human-Computer Interaction: Development Process, CRC Press, 145-162, 2009.
Rosson, M. B., and Carroll, J. M., Usability Engineering: Scenario-Based Development of Human-Computer Interaction, Morgan Kaufmann, 448 pp., 2002.
White-Newsome, J., O’Neill, M. S., Gronlund, C., Sunbury, T. M., Brines, S. J., Parker, E., Brown, D. G., Rood, R. B., and Rivers, Z., Climate Change, Heat Waves, and Environmental Justice: Advancing Knowledge and Action, Environmental Justice, 2, doi:10.1089/env.2009.0032, 2009.
Zhang, K., Oswald, E. M., Brown, D. G., Brines, S. J., Gronlund, C. J., White-Newsome, J. L., Rood, R. B., and O’Neill, M. S., Geostatistical Exploration of Spatial Variation of Summertime Temperatures in the Detroit Metropolitan Region, Environ. Res., 111, 1046-1053, 2011.
[6] RBR was a member of the EOSDIS advisory panels.