Abstract
The abundant provision of environmental data and their diffusion on the Internet through idiosyncratic methods, without unified standards for disclosure, has brought about a situation in which data are available but difficult to aggregate, synthesize and interpret. This article explores the roots and implications of practices of “scraping”, i.e. the automatic, unauthorized collection of data published on the web, enacted by public and private actors for purposes of sustainability. Drawing on the concept of the ‘datascape’ to describe the overall socio-technical environment in which these data circulate, the paper explores two case studies. The first, EDGI/DataRefuge, deals with a systematic attempt to collect and preserve environmental data and documents published by environmental management agencies, which are subject to removal under US Government policies. The second case, WorldAQI, examines a platform that collects, refines and publishes on maps the air quality indexes of hundreds of countries around the world. The first case allows us to see the human component of large-scale web scraping efforts involving highly heterogeneous data, which highlights the need for resources. The second case highlights how the processes of collation, formatting and normalization of heterogeneous data to maximize readability have implications for data quality and representativeness. In conclusion, we observe how, through data scraping, stakeholders can enhance the spatial and temporal comparability of data and provide new avenues for public participation in complex decision-making processes.
| Original language | English |
| --- | --- |
| Pages (from-to) | 346-358 |
| Number of pages | 13 |
| Journal | Comunicazioni Sociali |
| Issue number | 3 |
| Publication status | Published - 2020 |
All Science Journal Classification (ASJC) codes
- Cultural Studies
- Communication
- Visual Arts and Performing Arts
Keywords
- BIG DATA
- DIGITAL MEDIA
- ENVIRONMENT
- NGO
- SCRAPING
- SOFTWARE