6. Practitioner comments
This section contains general advice and guidance we received from practitioners in governments, central banks, and private-sector forecasting groups that may be useful when choosing a forecasting model and designing a forecasting workflow.
The most commonly used modelling approach is a suite of models comprising a reduced-form macroeconomic model and smaller auxiliary models that provide greater detail on tax bases. In the UK, institutions most commonly rely on error-correction models for the housing tax base. The results of the auxiliary models are taken as exogenous inputs to the macroeconomic model (the macro model is told to ignore its own housing equations and use the auxiliary results in their place).
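As a stylised illustration of the two-step (Engle-Granger) error-correction approach, the sketch below estimates a long-run relationship between log house prices and log household incomes on synthetic data, then regresses quarterly price growth on the lagged disequilibrium. The data, variable names, and coefficients are hypothetical and are not drawn from any institution's model; the sketch assumes NumPy is available.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarterly series: log house prices cointegrated with log incomes.
n = 120
income = np.cumsum(0.005 + 0.01 * rng.standard_normal(n))  # log income, random walk with drift
price = 1.2 * income + 0.05 * rng.standard_normal(n)       # log house prices

def ols(y, X):
    """Least-squares fit with an intercept prepended; returns design matrix and coefficients."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

# Step 1 (Engle-Granger): long-run levels regression price_t = a + b*income_t + e_t
X_lr, beta_lr = ols(price, income[:, None])
ect = price - X_lr @ beta_lr          # error-correction term (disequilibrium)

# Step 2: short-run dynamics, with the lagged disequilibrium pulling
# price growth back toward the long-run relationship.
d_price, d_income = np.diff(price), np.diff(income)
_, beta_sr = ols(d_price, np.column_stack([d_income, ect[:-1]]))

speed_of_adjustment = beta_sr[2]      # expected to be negative
print(speed_of_adjustment < 0)        # True: prices correct toward equilibrium
```

The negative coefficient on the lagged error-correction term is what distinguishes this specification: deviations from the long-run price-income relationship are gradually unwound in subsequent quarters.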
The outputs of the economic model are then sent to policy teams for estimating the cost of new tax measures and to fiscal forecasting teams for updating fiscal forecasts. Forecasting teams produce a pre-policy baseline to which the costings of new measures are added to arrive at a post-measures outlook. Fiscal forecasts are then sent back to the macroeconomic modelling team for iteration. There are often challenge meetings between fiscal forecasters and economic forecasters at this stage to ensure that revenue forecasts are consistent with macroeconomic developments.
Microsimulation models and micro databases are the main way that government policy teams translate forecasts of the housing tax base into forecasts of revenues.
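At its core, a microsimulation model of this kind applies the tax schedule to each record in a micro database of transactions and aggregates the results. A minimal sketch of a banded (slice) schedule follows; the thresholds and rates are illustrative only, and the transaction prices are hypothetical. The actual LBTT schedule is set by the Scottish Government.

```python
# Illustrative LBTT-style residential schedule: (lower threshold, marginal rate).
# Bands and rates here are for illustration only, not the statutory schedule.
BANDS = [(0, 0.00), (145_000, 0.02), (250_000, 0.05), (325_000, 0.10), (750_000, 0.12)]

def tax_due(price: float) -> float:
    """Marginal (slice) tax on a single transaction: each band's rate applies
    only to the portion of the price falling within that band."""
    uppers = [b[0] for b in BANDS[1:]] + [float("inf")]
    total = 0.0
    for (lower, rate), upper in zip(BANDS, uppers):
        if price > lower:
            total += (min(price, upper) - lower) * rate
    return total

# A hypothetical micro database of transaction prices, aggregated to revenue:
transactions = [120_000, 180_000, 300_000, 900_000]
revenue = sum(tax_due(p) for p in transactions)
print(round(tax_due(300_000)))   # 4600: 0 in the nil band, 2% of 105k, 5% of 50k
```

In practice the micro database is reweighted or uprated so that the simulated transactions are consistent with the forecast tax base before the schedule is applied.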
Roughly half of the forecasters had a DSGE model that was used for simulation and as a challenge to the other modelling results. Many government departments had specialty research units that provided in-depth housing research used to challenge the forecasts of the main models. Practitioners suggested that DSGE modelling is less relevant for forecasting, especially for the housing tax base for LBTT. They suggested it could, however, be used for analysis such as examining the response of household decisions and economic activity to an LBTT holiday.
All practitioners we interviewed emphasized the role of incorporating expert judgment in the forecast. This was said to be particularly important for the first two quarters, to help smooth the transition from historical data to the pure model result, or to incorporate monitoring data (more recent releases and peripheral data releases that may be at a higher frequency than the quarterly forecast data). National accounts data often conflict with tax data; because the former tend to be revised while the latter are more firmly grounded in actuals, recent national accounts quarters may be adjusted. Judgment also enters during the challenge meetings, where the narrative of the macro forecasters is squared with that of the tax forecasters. Judgment was also applied to bring extreme forecasts closer to market or consensus expectations.
Expert judgment tended to refer to the judgment of those with domain knowledge of economic forecasting and tax administration. While we were not informed of any real estate specialists in national government forecasting units, such specialists were included on the housing teams of private-sector bank economics departments and of a subnational government. The latter stressed the importance of an industry expert with deep knowledge of the real estate market, and also valued the expertise of its legal professionals formerly in the real estate industry. Most practitioners did, however, engage regularly with the real estate business community.
One government modelling group valued its close relationship with the public mortgage insurance provider, allowing for much richer borrower-level data for policy analysis such as modelling changes to mortgage rules.
Team sizes in forecasting departments responsible for modelling the housing sector vary between one and six analysts (one senior and the rest juniors), with roles shared between housing research and other macroeconomic forecasting and special issues analysis. Typically, different teams are responsible for forecasting the housing market tax base from those responsible for applying the tax structure to the base.
Practitioners were divided roughly evenly between three schedules for updating a model's data: 1) quarterly, 2) as soon as data is released, and 3) during the forecasting rounds for document production (for example, finance departments often update data during the run-up to spring and fall budget statements).
Parameters are generally re-estimated once a year (for example, with the release of the Blue Book in the UK) to once every two years. Re-estimation is rarely left longer than two years; however, the updates depend on analyst turnover and expertise. Some teams let the data dictate the schedule, re-estimating, for example, whenever there is a substantial revision to the accounting procedures of the economic accounts. Extensive model evaluation and development is carried out infrequently. Some departments have internal or external evaluation committees that review the forecast every three to ten years.
The macro model used by HMT and the OBR runs on the WinSolve software suite, which is available for free online. Other practitioners most commonly listed EViews (for its ease of handling time series data), followed by Stata. Matlab is a popular choice for DSGE and dynamic factor modelling. Departments with access to taxpayer data use either Microsoft Access or SQL databases to retrieve records for analysis. Microsimulation models are built as proprietary standalone software applications. Spreadsheets are used by all practitioners, particularly for auxiliary models.
Models most commonly used data at the quarterly frequency. Practitioners reported that quarterly modelling is able to capture seasonality and behavioural responses to policy changes that could be lost in annual data. Monthly data could be considered for models that require more observations, but is available for only a subset of variables. Practitioners using quarterly models often built up short-run forecasts using monthly data such as RICS, Nationwide, and Halifax indexes. Practitioners flagged the problem of identifying appropriate price indexes to use as the average house price for the dependent variables in modelling, with several different options available, each with different strengths and weaknesses.
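One simple way to build a short-run quarterly estimate from partial monthly index releases (such as the RICS, Nationwide, or Halifax series) is to average the months published so far, carrying the latest reading forward for months not yet released. The index values below are hypothetical, and this is only one of several possible nowcasting conventions.

```python
# Hypothetical monthly house price index for the current quarter:
# two months published, the third not yet released.
monthly_index = [101.0, 101.8, None]

# Carry the latest published reading forward for unpublished months,
# then average to a quarterly figure.
published = [v for v in monthly_index if v is not None]
filled = [v if v is not None else published[-1] for v in monthly_index]
quarterly_estimate = sum(filled) / len(filled)
print(round(quarterly_estimate, 2))   # 101.53
```

More elaborate variants project the missing month from recent momentum rather than holding it flat, but the mechanics of aggregating monthly readings to the quarterly modelling frequency are the same.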
Practitioners were split on the importance of forecast assessments and error decompositions. Roughly half do both, while the other half do not perform forecast assessments. Some suggested that accuracy in outer years is not a high concern and that assessments are problematic, even if undertaken. Problems with forecast error evaluation include: changes in accounting methods in the national accounts and public accounts (for example, IFRS or GAAP accruals standards for tax revenues); difficulty controlling for policy initiatives, which are themselves forecasts with large uncertainty (and often with no history to rely upon); and uncertainty and errors introduced by the forecasts of exogenous variables (for example, economic determinants such as household incomes and employment). One example of the difficulty of forecast evaluation is that a forecast's accuracy and influence can itself lead to forecast errors: if a weakening housing market is forecast, the government may implement fiscal stimulus (such as the Stamp Duty Land Tax holiday following the financial crisis). The impact of such influences on the forecast error is difficult or impossible to calculate, although some try.
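The exogenous-variable problem can be made concrete: re-running the model with the determinants' outturns splits the total forecast error into a part attributable to the determinant forecasts and a residual model error. The stylised one-equation model and all numbers below are hypothetical.

```python
def model(income_growth: float) -> float:
    """Stylised model: housing tax base growth (%) as a function of
    household income growth (%). Coefficients are hypothetical."""
    return 0.5 + 1.5 * income_growth

income_forecast, income_actual = 2.0, 1.2   # % growth: forecast vs outturn (hypothetical)
base_forecast = model(income_forecast)      # the published forecast
base_actual = 3.0                           # hypothetical outturn for the tax base

# Decomposition: total error = determinant error + model error.
total_error = base_actual - base_forecast
determinant_error = model(income_actual) - model(income_forecast)  # from exogenous inputs
model_error = base_actual - model(income_actual)                   # residual model error

assert abs(total_error - (determinant_error + model_error)) < 1e-9
print(round(determinant_error, 2), round(model_error, 2))   # -1.2 0.7
```

In this example most of the miss comes from the income forecast rather than the model itself, which is the distinction practitioners said such decompositions are meant to surface.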
Email: Jamie Hamilton
Phone: 0300 244 4000 – Central Enquiry Unit
The Scottish Government
St Andrew's House