Abstract
Calibrating agent-based models (ABMs) in economics and finance typically involves a derivative-free search in a very large parameter space. In this work, we benchmark a number of search methods in the calibration of a well-known macroeconomic ABM on real data, and further assess the performance of "mixed strategies" made by combining different methods. We find that methods based on random-forest surrogates are particularly efficient, and that combining search methods generally increases performance, since the biases of any single method are mitigated. Building on these observations, we propose a reinforcement learning (RL) scheme to automatically select and combine search methods on the fly during a calibration run. The RL agent keeps exploiting a specific method only as long as it continues to perform well, but explores new strategies when that method reaches a performance plateau. The resulting RL search scheme outperforms every other method or method combination tested, and does not rely on any prior information or trial-and-error procedure.
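The abstract's idea of exploiting a search method while it performs well and exploring alternatives when it plateaus can be illustrated with a minimal epsilon-greedy bandit over candidate search methods. This is only an illustrative sketch, not the paper's actual algorithm: the toy `loss` function, the two example methods, and the improvement-based reward are all assumptions made for the example.

```python
import random

def loss(theta):
    # Toy stand-in for the ABM-vs-data calibration distance.
    return (theta[0] - 0.3) ** 2 + (theta[1] + 0.7) ** 2

def random_search(best, rng):
    # Global exploration: sample uniformly in the parameter box.
    return [rng.uniform(-1, 1), rng.uniform(-1, 1)]

def local_perturbation(best, rng):
    # Local exploitation: perturb the current best parameters.
    return [b + rng.gauss(0, 0.1) for b in best]

METHODS = [random_search, local_perturbation]

def rl_calibrate(n_iters=200, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(METHODS)  # running value estimate per method
    n = [0] * len(METHODS)    # times each method was selected
    best = [0.0, 0.0]
    best_loss = loss(best)
    for _ in range(n_iters):
        # Epsilon-greedy: mostly exploit the best-valued method,
        # occasionally explore another one.
        if rng.random() < eps:
            a = rng.randrange(len(METHODS))
        else:
            a = max(range(len(METHODS)), key=lambda i: q[i])
        candidate = METHODS[a](best, rng)
        cand_loss = loss(candidate)
        # Reward = improvement over the incumbent; a plateauing method
        # earns zero reward, so its value estimate decays toward zero.
        reward = max(0.0, best_loss - cand_loss)
        if cand_loss < best_loss:
            best, best_loss = candidate, cand_loss
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]  # incremental mean update
    return best, best_loss

best, best_loss = rl_calibrate()
```

Because the reward is the realized improvement, a method that stops improving the fit sees its value estimate shrink, and the agent naturally shifts probability mass toward other methods, mirroring the exploit-until-plateau behaviour described above.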
Original language | English |
---|---|
Title of host publication | Proceedings of the 4th ACM International Conference on AI in Finance (ICAIF ’23) |
Pages | 305-313 |
Number of pages | 9 |
Publication status | Published - 2023 |
Event | 4th ACM International Conference on AI in Finance, ICAIF 2023, New York. Duration: 27 Nov 2023 → 29 Nov 2023 |
Conference
Conference | 4th ACM International Conference on AI in Finance, ICAIF 2023 |
---|---|
City | New York |
Period | 27/11/23 → 29/11/23 |
Keywords
- agent-based modelling
- reinforcement learning
- planning under uncertainty
- model calibration