Abstract
The abundant provision of environmental data and its diffusion on the Internet through
idiosyncratic methods, without unified standards for disclosure, have brought about a situation in
which data is available but difficult to aggregate, synthesize, and interpret. This article explores the
roots and implications of practices of “scraping”, i.e. the automatic, unauthorized collection of data
published on the web, enacted by public and private actors for sustainability purposes.
Drawing on the concept of “datascape” to describe the overall socio-technical environment in which this
data circulates, the paper explores two case studies. The first, EDGI/DataRefuge, deals with a
systematic attempt to collect and preserve environmental data and documents published by environmental
management agencies, which are at risk of deletion under US Government policies.
The second case, WorldAQI, examines a platform that collects, refines, and publishes on maps the
air quality indices of hundreds of countries around the world. The first case reveals the human
component of large-scale efforts to scrape highly heterogeneous data, highlighting
the resources such work requires. The second case shows how the processes of collation, formatting, and
normalization of heterogeneous data to maximize readability have implications for data quality
and representativeness. In conclusion, we observe how, through data scraping, stakeholders can
enhance the spatial and temporal comparability of data and open new avenues for public participation
in complex decision-making processes.
Original language | English |
---|---|
Pages (from-to) | 346-358 |
Number of pages | 13 |
Journal | Comunicazioni Sociali |
Publication status | Published - 2020 |
Keywords
- BIG DATA
- DIGITAL MEDIA
- ENVIRONMENT
- NGO
- SCRAPING
- SOFTWARE