Spatial econometrics is currently experiencing the Big Data revolution, both in terms of the volume of data and the velocity with which they accumulate. Regional data, traditionally employed in spatial econometric modeling, can be very large, with information increasingly available at very fine resolution levels such as census tracts, local markets, town blocks, regular grids, or other small partitions of the territory. In spatial microeconometric models, which refer to granular observations of individual economic agents, the number of available observations can be far larger still. This paper reports the results of a systematic simulation study on the limits of current methodologies when estimating spatial models with large datasets. In our study we simulate a Spatial Lag Model (SLM), estimate it using Maximum Likelihood (ML), Two-Stage Least Squares (2SLS), and a Bayesian estimator (B), and test their performance for different sample sizes and different levels of sparsity of the weight matrices. We consider three performance indicators: computing time, required storage, and accuracy of the estimators. The results show that, using standard computer capabilities, the analysis becomes prohibitive and unreliable when the sample size exceeds 70,000, even for low levels of sparsity. This result suggests that new approaches should be introduced to analyze the big datasets that are quickly becoming the new standard in spatial econometrics.
|Number of pages|7|
|Journal|Regional Science and Urban Economics|
|Publication status|Published - 2019|
- big spatial data, computational issues, spatial econometric models, maximum likelihood, bayesian estimator, dense matrix, spatial two stage estimator
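As an illustration of the setup the abstract describes, the sketch below simulates an SLM, y = ρWy + Xβ + ε, with a sparse row-standardized weight matrix, solving the linear system (I − ρW)y = Xβ + ε rather than forming the dense inverse. The banded "neighbors on a line" weight structure, the parameter values, and the function name `simulate_slm` are illustrative assumptions, not the paper's actual experimental design.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def simulate_slm(n, rho=0.5, beta=(1.0, 2.0), k=5, seed=0):
    """Simulate a Spatial Lag Model  y = rho*W y + X beta + eps.

    W is a sparse, row-standardized weight matrix. Here each unit is
    linked to its k predecessors and k successors on a line (a banded
    structure chosen purely for illustration).
    """
    rng = np.random.default_rng(seed)
    # Banded neighbor structure: unit i linked to i-k..i-1 and i+1..i+k.
    offsets = [o for o in range(-k, k + 1) if o != 0]
    diagonals = [np.ones(n - abs(o)) for o in offsets]
    W = sp.diags(diagonals, offsets, format="csr")
    # Row-standardize so each row of W sums to one.
    row_sums = np.asarray(W.sum(axis=1)).ravel()
    W = sp.diags(1.0 / row_sums) @ W
    # Covariates: intercept plus one standard-normal regressor.
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    eps = rng.normal(size=n)
    # Solve (I - rho*W) y = X beta + eps using a sparse solver,
    # avoiding the dense n x n inverse entirely.
    A = sp.eye(n, format="csc") - rho * W.tocsc()
    y = spsolve(A, X @ np.asarray(beta) + eps)
    return y, X, W
```

The sparse representation matters because storing a dense double-precision weight matrix at n = 70,000 would alone require 70,000² × 8 bytes ≈ 39 GB, whereas CSR storage grows only with the number of nonzero links.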