Abstract
Calibrating agent-based models (ABMs) in economics and finance typically involves a derivative-free search in a very large parameter space. In this work, we benchmark a number of search methods in the calibration of a well-known macroeconomic ABM on real data, and further assess the performance of "mixed strategies" made by combining different methods. We find that methods based on random-forest surrogates are particularly efficient, and that combining search methods generally increases performance, since the biases of any single method are mitigated. Building on these observations, we propose a reinforcement learning (RL) scheme to automatically select and combine search methods on the fly during a calibration run. The RL agent keeps exploiting a given method only as long as it performs well, and explores new strategies when that method reaches a performance plateau. The resulting RL search scheme outperforms every other method or method combination tested, and does not rely on prior information or trial-and-error tuning.
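The abstract describes the RL scheme only at a high level. The sketch below illustrates one plausible reading of it, assuming a multi-armed bandit over search methods whose reward is the improvement in the calibration loss; the method names, the toy loss, and the UCB selection rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

DIM = 4  # illustrative dimension of the ABM parameter space


def loss(theta):
    # Stand-in for the real calibration loss (distance between
    # simulated and observed time series); a toy quadratic here.
    return float(np.sum((theta - 0.3) ** 2))


# Two hypothetical search methods: each proposes the next parameter
# vector given the incumbent best point.
def random_search(best, rng):
    return rng.uniform(0.0, 1.0, size=DIM)


def local_perturbation(best, rng):
    return np.clip(best + rng.normal(0.0, 0.05, size=DIM), 0.0, 1.0)


METHODS = [random_search, local_perturbation]


def rl_calibrate(budget=200, c=0.5, seed=0):
    rng = np.random.default_rng(seed)
    best = rng.uniform(0.0, 1.0, size=DIM)
    best_loss = loss(best)
    counts = np.zeros(len(METHODS))   # times each method was chosen
    rewards = np.zeros(len(METHODS))  # cumulative improvement per method
    for t in range(1, budget + 1):
        # UCB score: average improvement plus an exploration bonus that
        # grows for neglected methods; unused methods are tried first.
        means = np.where(counts > 0, rewards / np.maximum(counts, 1), np.inf)
        bonus = c * np.sqrt(np.log(t + 1) / np.maximum(counts, 1))
        arm = int(np.argmax(means + bonus))
        theta = METHODS[arm](best, rng)
        current = loss(theta)
        # Reward = improvement over the incumbent best: a method keeps
        # being exploited while it improves the fit, and loses out to
        # the exploration bonus once it plateaus.
        rewards[arm] += max(best_loss - current, 0.0)
        counts[arm] += 1
        if current < best_loss:
            best, best_loss = theta, current
    return best, best_loss


if __name__ == "__main__":
    theta, final_loss = rl_calibrate()
    print(f"best loss {final_loss:.4f} at theta {np.round(theta, 3)}")
```

A decaying epsilon-greedy rule would serve the same purpose; the key design choice is tying the reward to recent improvement, so a plateauing method naturally stops being selected.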
Original language | English
---|---
Host publication title | Proceedings of the 4th ACM International Conference on AI in Finance (ICAIF ’23)
Pages | 305-313
Number of pages | 9
Volume | Proceedings of the 4th ACM International Conference on AI in Finance (ICAIF ’23)
DOI |
Publication status | Published - 2023
Event | 4th ACM International Conference on AI in Finance, ICAIF 2023 - New York. Duration: 27 Nov 2023 → 29 Nov 2023
Conference
Conference | 4th ACM International Conference on AI in Finance, ICAIF 2023
---|---
City | New York
Period | 27/11/23 → 29/11/23
Keywords
- agent-based modelling
- reinforcement learning
- planning under uncertainty
- model calibration