Every day, insurance companies collect an enormous quantity of text data from multiple sources. Using Natural Language Processing, we present a strategy to exploit the wealth of information available in these documents. After a brief review of the basics of text mining, we describe a case study in which, by analyzing the accident narratives written by researchers of the National Highway Traffic Safety Administration (NHTSA) of the U.S. Department of Transportation, we aim to extract latent information useful for fine-tuning policy premiums. The process consists of two steps. First, we classify the reports according to the relevance of their content in order to derive the risk profiles of the people involved. Next, we use these profiles to add new latent risk covariates to the ratemaking process for a company's customers.
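The two-step process could be sketched as follows. This is a minimal illustration, not the paper's actual method: the narratives, labels, and model choices (TF-IDF features with a logistic regression classifier from scikit-learn) are assumptions made for the example; the study works on the full NHTSA accident reports.

```python
# Hypothetical sketch of the two-step idea: (1) classify accident
# narratives by the riskiness of the behaviour they describe,
# (2) use the predicted risk score as a new latent covariate for
# ratemaking. All data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: train a text classifier on labeled narratives.
narratives = [
    "driver was speeding and ran a red light before the collision",
    "vehicle struck from behind while stopped at a traffic signal",
    "driver lost control while texting and crossed the center line",
    "car was parked legally when it was hit by another vehicle",
]
high_risk = [1, 0, 1, 0]  # 1 = narrative suggests risky driver behaviour

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(narratives, high_risk)

# Step 2: the predicted probability of risky behaviour becomes a
# latent covariate that can enter a pricing model (e.g. a GLM).
new_report = ["driver ran a stop sign at high speed"]
risk_score = clf.predict_proba(new_report)[0, 1]
print(f"latent risk covariate: {risk_score:.2f}")
```

In a real application, the risk score would be estimated per policyholder from all reports mentioning them and then added to the existing rating factors.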
Number of pages: 15
Publication status: Published - 2019
- Natural language processing
- Policy premiums
- Text mining