Confidence Intervals: Transforming Your Planning Process Using Monte Carlo Simulation
Introduction
When it comes to financial planning, most of us default to presenting a single number for net income: a definitive figure that reflects our projections. However, this approach lacks nuance, leaving no room for uncertainty or variability. Instead, we might consider providing confidence intervals for planned net income that explicitly take uncertainty into account.
By incorporating confidence intervals into your planning process, you’ll gain a broader understanding of potential outcomes, enabling better decision-making. Confidence intervals offer a range of possibilities, allowing you to factor in uncertainties and present a more resilient plan. Let’s explore why this matters and how you can integrate this method into your business strategy.
Why Confidence Intervals Are Key to Better Planning
Have you ever wondered why planning with confidence intervals is worth your attention? Confidence intervals provide a range of potential outcomes, offering insights into the likelihood of achieving specific results.
For instance, instead of stating, “We expect a net income of $1 million,” you might say, “There’s a 90% probability our net income will fall between $0.75 million and $1.25 million, and only a 5% probability it will fall below $0.75 million.” This range paints a more complete picture, highlighting the risks and opportunities embedded in your projections.
Benefits of Confidence Intervals:
- Improved decision-making: By visualising the range of possibilities, you’re equipped to make more informed choices.
- Enhanced stakeholder trust: Confidence intervals show you’ve considered uncertainties, which adds credibility.
- Risk management: Identifying variability helps you prepare for both best- and worst-case scenarios.
What You Need to Consider for Confidence Interval Planning
To effectively incorporate confidence intervals, you need a clear understanding of how to model scenarios and their probability distributions. Start by identifying key inputs such as revenue growth, costs, and market conditions. Then, consider the following factors:
- Distribution of Inputs: Are your variables normally distributed, skewed, or subject to extremes? For example, sales growth may follow a log-normal distribution rather than a normal one (assuming it is driven by multiplicative or percentage-based influences, where each period’s sales are the product of prior sales and a random growth factor), while insurance claim severities may follow a right-skewed log-normal distribution that captures rare, severe losses. A short sampling sketch follows this list.
- Boundary Conditions: Define assumptions and constraints. For example, if you’re forecasting net income, ensure growth rates don’t exceed historical benchmarks without justification. Review the range in which your data points (e.g. growth factors) reside.
- Scenario Modeling: Use simulation techniques, such as Monte Carlo, to generate thousands of possible outcomes. Each iteration samples from the distribution of inputs, creating a rich dataset of results.
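To make the multiplicative-growth point above concrete, here is a minimal sketch of simulating a sales path from log-normally distributed growth factors; the starting volume, 2% mean monthly growth, 5% volatility, and 12-month horizon are illustrative assumptions rather than figures from the example that follows.

import numpy as np

# Minimal sketch: simulate one year of sales under multiplicative (log-normal) growth.
# Starting volume, 2% mean monthly growth, and 5% volatility are illustrative assumptions.
rng = np.random.default_rng(seed=42)
starting_sales = 10_000         # starting monthly unit sales (illustrative)
mean_log_growth = np.log(1.02)  # roughly 2% average monthly growth
log_growth_volatility = 0.05    # volatility of log growth factors

# Each month's sales equal the previous month's sales times a random growth factor,
# so log(sales) follows a random walk and sales themselves are log-normally distributed.
monthly_log_growth = rng.normal(mean_log_growth, log_growth_volatility, size=12)
sales_path = starting_sales * np.exp(np.cumsum(monthly_log_growth))

print(f"Year-end monthly sales: {sales_path[-1]:,.0f} units")
print(f"Cumulative growth factor: {sales_path[-1] / starting_sales:.2f}")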
Example: Assume you have historical sales data showing a mean of 20,000 units sold (at a fixed selling price of $60) with a standard deviation of 3,000 units. Costs are estimated at $10 per unit with a standard deviation of $1. By inputting these variables into a Monte Carlo simulation and running 10,000 iterations, you obtain a range of possible net incomes. For instance, the simulation might reveal that there’s a 90% probability the net income falls between $0.75 million and $1.25 million. Below is the histogram showing the frequency distribution of net income outcomes from the Monte Carlo simulation (code below):
import numpy as np
import matplotlib.pyplot as plt
# Parameters for simulation
mean_sales_volume = 20000 # Mean sales volume
std_dev_sales_volume = 3000 # Standard deviation of sales volume
selling_price = 60 # Fixed selling price per unit
mean_unit_cost = 10 # Mean unit cost per unit
std_dev_unit_cost = 1 # Standard deviation of unit cost per unit
iterations = 10000 # Number of Monte Carlo iterations
# Generate random sales volumes and unit costs
sales_volumes = np.random.normal(mean_sales_volume, std_dev_sales_volume, iterations)
unit_costs = np.random.normal(mean_unit_cost, std_dev_unit_cost, iterations)
# Calculate profits for each iteration
profits = sales_volumes * (selling_price - unit_costs)
# Calculate key statistics
median_profit = np.median(profits)
std_dev_profit = np.std(profits)
lower_bound = np.percentile(profits, 5) # 5th percentile
upper_bound = np.percentile(profits, 95) # 95th percentile
# Print results
results = {
    "Median Profit": median_profit,
    "Standard Deviation of Profit": std_dev_profit,
    "90% Confidence Interval": (lower_bound, upper_bound)
}
for name, value in results.items():
    print(f"{name}: {value}")
# Plot the profit distribution
plt.hist(profits, bins=50, edgecolor='black', alpha=0.7)
plt.title('Simulated Profit Distribution')
plt.xlabel('Profit ($)')
plt.ylabel('Frequency')
plt.axvline(lower_bound, color='red', linestyle='dashed', linewidth=1, label=f'5th Percentile: ${lower_bound:,.2f}')
plt.axvline(upper_bound, color='green', linestyle='dashed', linewidth=1, label=f'95th Percentile: ${upper_bound:,.2f}')
plt.axvline(median_profit, color='blue', linestyle='dashed', linewidth=1, label=f'Median: ${median_profit:,.2f}')
plt.legend()
plt.show()
Financial modelling
When modelling future scenarios, it’s essential to take into account historical data, economic indicators, and market assumptions to derive potential future outcomes. While the normal distribution is commonly used, it may not always be the best fit. The choice of distribution (e.g. for growth rates) for modelling potential outcomes depends on the underlying business and its unique characteristics; the table below shows a few examples:
Category | Distribution | Factors to Consider |
---|---|---|
General | Growth rates often follow a normal distribution; absolute values (e.g., total revenue) may show skewness due to rare, high-revenue periods. | - Adjust for seasonality and shocks (e.g., travel bans). - Use Student’s t-distribution for heavy-tailed events like economic downturns. |
Sports Retailers | Normal or slightly heavy-tailed for growth rates. | - Account for outliers caused by major events or product launches. - Use mixture models when appropriate. |
Insurance Claims | Normal for aggregate claim growth, but catastrophic events require heavier-tailed models. | - Separate modelling for claim frequency and severity. - Use specialised distributions for catastrophic events (e.g. Poisson for claim frequency, heavy-tailed distributions for severity). |
As best practice, you might want to follow the steps below to avoid overfitting your model:
- Detrend and Deseasonalize Data: Remove seasonal variations (e.g., holiday sales, weather impacts) to uncover consistent patterns.
- Choose Initial Distributions: Base the initial distributions on historical data.
- Validate Models: Use goodness-of-fit tests (a short fitting sketch follows this list).
- Adjust for Outliers and Heavy Tails: Incorporate appropriate distributions to handle extreme events (exceeding a threshold).
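As a minimal sketch of the “Choose Initial Distributions” and “Validate Models” steps, the code below fits two candidate distributions to a series of growth factors and compares them with a Kolmogorov-Smirnov test; the synthetic data is an illustrative assumption standing in for detrended, deseasonalised historical data.

import numpy as np
from scipy import stats

# Minimal sketch: fit candidate distributions to (already detrended and deseasonalised)
# historical growth factors and compare goodness of fit.
# The synthetic data below is an illustrative assumption, not real company data.
rng = np.random.default_rng(seed=1)
growth_factors = rng.lognormal(mean=0.02, sigma=0.08, size=120)  # e.g. 10 years of monthly data

# Candidate 1: normal distribution
norm_params = stats.norm.fit(growth_factors)
norm_ks = stats.kstest(growth_factors, 'norm', args=norm_params)

# Candidate 2: log-normal distribution (location fixed at zero)
lognorm_params = stats.lognorm.fit(growth_factors, floc=0)
lognorm_ks = stats.kstest(growth_factors, 'lognorm', args=lognorm_params)

# A higher p-value means weaker evidence against that distribution being a reasonable fit
print(f"Normal:     KS statistic={norm_ks.statistic:.3f}, p-value={norm_ks.pvalue:.3f}")
print(f"Log-normal: KS statistic={lognorm_ks.statistic:.3f}, p-value={lognorm_ks.pvalue:.3f}")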
Switching Context: A Profit & Loss Statement Example for an Insurance Company
To express uncertainty in an IFRS 17 income (profit & loss) statement, we can follow a structured approach, incorporating an additional step (Step b) to enhance our methodology:
Step | Description | Details |
---|---|---|
a. Key Uncertain Inputs | Identify inputs that drive uncertainty in cashflows | - e.g. claim frequency and severity, premium volumes, lapse rates, expenses, and economic factors such as discount rates. |
b. Building the Simulation Model | Simulate multiple scenarios for cashflows and subsequently apply IFRS 17 logic | - Sample the uncertain inputs for each scenario. - Run the resulting cashflows through the IFRS 17 measurement logic (e.g. CSM and RA releases, loss component for onerous contracts). |
c. Analysing Results | Summarise and extract insights from the simulation results | - Obtain a distribution of net income outcomes. - Extract probabilistic insights, such as confidence intervals and thresholds. |
At this point, you might wonder: Why not vary net income directly instead of simulating cashflow inputs and running them through the IFRS 17 model?
While it is theoretically possible to impose a probability distribution directly on key outputs (e.g., CSM release, RA release, loss on new business, and net income), this shortcut has significant limitations:
Why Simulating Cashflows is More Robust Than Directly Modeling Net Income
Preserving Causal Integrity
IFRS 17 follows a structured chain of calculations, starting from premiums, claims, expenses, and discounting, leading to the final net income components. Skipping directly to output distributions severs these logical links, risking unrealistic and inconsistent results that do not align with the underlying contract data and assumptions.
Understanding Sensitivities and Risk Drivers
By modelling uncertain inputs and processing them through IFRS 17 calculations, you can trace variations in net income back to specific risk drivers (e.g., changes in claim frequency, premium volumes, or economic factors). If you manipulate net income directly, you lose visibility into which factors are driving uncertainty and to what extent.
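As a minimal sketch of this traceability (assuming you keep the sampled inputs alongside the resulting net income for each scenario), a simple correlation check can rank risk drivers; the input names and the placeholder relationship below are illustrative assumptions.

import numpy as np

# Minimal sketch: rank risk drivers by their correlation with simulated net income.
# 'simulated_inputs' and 'net_income' would normally come from a prior simulation run;
# the arrays below are illustrative placeholders.
rng = np.random.default_rng(seed=0)
simulated_inputs = {
    "claim_frequency": rng.normal(1.00, 0.10, 10_000),
    "premium_volume": rng.normal(1.00, 0.05, 10_000),
    "discount_rate_shift": rng.normal(0.00, 0.01, 10_000),
}
# Placeholder net income: in practice this comes from the IFRS 17 model per scenario
net_income = (simulated_inputs["premium_volume"] * 1_000_000
              - simulated_inputs["claim_frequency"] * 600_000
              + simulated_inputs["discount_rate_shift"] * 2_000_000)

for name, values in simulated_inputs.items():
    corr = np.corrcoef(values, net_income)[0, 1]
    print(f"{name}: correlation with net income = {corr:+.2f}")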
Asymmetric Treatment of Profit and Loss
IFRS 17 calculations require sophisticated non-linear modelling due to the asymmetric treatment of profit and loss, where onerous contracts trigger immediate recognition of the full loss component while profitable contracts release the Contractual Service Margin gradually over the coverage period. This timing difference creates distinct "tipping points" in net income predictions when cohorts transition to onerous status.
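The sketch below illustrates that asymmetry on a deliberately simplified cohort: if expected fulfilment cashflows are negative at recognition, the full loss hits net income immediately; otherwise a margin is released evenly over the coverage period. It ignores discounting and the risk adjustment, and it is not a complete IFRS 17 measurement model.

import numpy as np

# Deliberately simplified illustration of the profit/loss asymmetry:
# onerous cohorts recognise the full loss at once, profitable cohorts release a margin over time.
# This is NOT a complete IFRS 17 measurement model (no discounting, no risk adjustment).
def recognise_cohort(expected_premiums, expected_claims_and_expenses, coverage_years):
    fulfilment_result = expected_premiums - expected_claims_and_expenses
    if fulfilment_result < 0:
        # Onerous: the entire loss component hits net income in year 1
        return [fulfilment_result] + [0.0] * (coverage_years - 1)
    # Profitable: the margin is released evenly over the coverage period (straight-line for simplicity)
    return [fulfilment_result / coverage_years] * coverage_years

# Simulate claims uncertainty and observe the "tipping point" when cohorts turn onerous
rng = np.random.default_rng(seed=7)
premiums = 1_000_000
claims_scenarios = rng.normal(950_000, 100_000, size=10_000)  # illustrative assumptions
year_1_results = np.array([recognise_cohort(premiums, c, coverage_years=5)[0]
                           for c in claims_scenarios])

print(f"Share of onerous scenarios: {np.mean(year_1_results < 0):.1%}")
print(f"Median year-1 result: {np.median(year_1_results):,.0f}")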
Practical Considerations
• Computational Complexity: Step b. (Building the Simulation Model) may require substantial computing power, particularly when modelling large datasets or complex cashflow patterns. To optimise performance, it is good practice to review Step a. outputs in advance and anticipate how different cashflow scenarios may affect net income in Step c. This helps with early sense-checking before running full simulations.
• Layered Stress Testing: Applying multiple layers of stresses to cashflows—such as separate shocks for claims inflation, lapse rates, or discount rates—allows for a more granular understanding of their individual and combined impacts. This structured approach enhances risk assessment and decision-making.
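A minimal sketch of such layered stresses might look like the following; the base cashflows and shock sizes are illustrative assumptions, and in practice each stressed cashflow set would be run through the full IFRS 17 model rather than a simple net cashflow measure.

# Minimal sketch: apply stresses to cashflows one layer at a time, then combined.
# Base cashflows and shock sizes are illustrative assumptions only.
base_claims = 600_000
base_premiums = 1_000_000

stresses = {
    "claims inflation +10%": {"claims": 1.10, "premiums": 1.00},
    "lapse-driven premium -5%": {"claims": 1.00, "premiums": 0.95},
    "combined": {"claims": 1.10, "premiums": 0.95},
}

base_result = base_premiums - base_claims
for name, shock in stresses.items():
    stressed_result = base_premiums * shock["premiums"] - base_claims * shock["claims"]
    print(f"{name}: net cashflow impact = {stressed_result - base_result:,.0f}")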
Conclusion
Confidence intervals offer a transformative way to approach planning, moving beyond static figures to embrace the complexities of real-world variability. By incorporating tools like Monte Carlo simulations and focusing on distribution-based modelling, you’ll enhance your decision-making, build stakeholder trust, and better manage risks.
Next Steps: Ready to dive deeper? Explore tools and resources that simplify confidence interval modelling, such as statistical software, Monte Carlo platforms, and IFRS 17 guidance documents. Start experimenting with confidence intervals in your business planning today.
Alberto Desiderio is deeply passionate about data analytics, particularly in the contexts of financial investment, sports, and geospatial data. He thrives on projects that blend these domains, uncovering insights that drive smarter financial decisions, optimise athletic performance, or reveal geographic trends.