In this post, I cover how to do backtesting in Python using Pandas and TA-Lib. If you are learning Python for finance, I think it pays off to understand how to backtest a strategy without using frameworks.

This will help you better understand how everything works and will give you full control over the backtest.

Using libraries such as backtrader or backtesting.py is easier, but you lose some control over what is going on and they are less flexible when you want to include AI models. In the long run, I think it pays off to be able to run a backtest with only a DataFrame library such as Pandas or Polars.

## 1. Getting the data from Yahoo Finance

Here I leverage the yfinance Python package to get price data from Yahoo Finance. You can change the tickers to get data for other assets. For example, if you want bitcoin prices you can use the “BTC-USD” ticker.

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import yfinance as yf
import talib

# Ticker list to download
tickers = 'SPY QQQ TLT AAPL MSFT GOOG AMZN NFLX NVDA ADBE'

# Download data from Yahoo Finance
data = yf.download(
    tickers=tickers,
    period='max',
    interval='1d',
    ignore_tz=True,
    auto_adjust=True,  # Adjust all fields by splits and dividends
    # group_by='ticker',
)

# Get adjusted close prices from 2005 onwards
data = data['Close']
data = data["2005-01-01":]
data.head()
```

## 2. Compute Daily Returns

The `data` pandas DataFrame has the adjusted close prices of a list of stocks and ETFs. The idea now is to compute the daily returns of each of these and define the indicators of the strategy.

```
ticker = "SPY"

# Select the adjusted close prices
close_adj = data[[ticker]].copy()
close_adj.columns = ['close']

# Compute daily returns
close_adj['R'] = close_adj.close.pct_change().fillna(0)
close_adj.head()
```

## 3. Trend following strategy

In this section I define how the strategy works. The idea of this strategy is to capture the long-term trend of a stock while avoiding the largest drops.

The main objective is to accept a small reduction in the expected return you would get from buying and holding the stock, in exchange for a lower max drawdown.

To implement the strategy, I define the two indicators I’ll be using: a slow and a fast moving average. I’ll go long a stock when the `fast_ma` is higher than the `slow_ma`; otherwise I’ll be out of the market.

These could be simple or exponential moving averages; it doesn’t matter at this point. You can backtest both and see which one works better later on.
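As a quick illustration of the difference between the two, here is a minimal pandas sketch on a toy price series (the article itself uses TA-Lib for this; `rolling().mean()` and `ewm().mean()` are the pandas equivalents of an SMA and an EMA):

```python
import pandas as pd

# Toy price series; in the article the prices come from yfinance
prices = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0, 12.0, 13.5])

# Simple moving average: equal weight over the window
sma = prices.rolling(window=3).mean()

# Exponential moving average: recent prices weigh more
ema = prices.ewm(span=3, adjust=False).mean()

print(sma.iloc[2], ema.iloc[2])  # → 11.0 11.25
```

The EMA reacts faster to recent moves, while the SMA is smoother; the grid search later in the post is one way to decide which fits the strategy better.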

```
# Define indicators
close_adj['slow_ma'] = talib.SMA(close_adj.close, 200)
close_adj['fast_ma'] = talib.SMA(close_adj.close, 10)
# Plot Close and Slow MA
close_adj[['close', 'slow_ma']].plot(logy=True, figsize=(10,4))
```

The `slow_ma` captures the long-term trend of the stock. In this case I used an SMA (simple moving average) of 200 trading days.

`close_adj[['slow_ma', 'fast_ma']].plot(logy=True, figsize=(10,4))`

This plot is helpful to get an intuition of the strategy. We will go long when the `fast_ma` is higher than the `slow_ma`, and we will be out of the market otherwise.

From looking at this chart, it seems the strategy is able to generate a sell signal prior to the 2008 crisis.

## 4. Computing the signal

The `get_signal` function implements the strategy we discussed. The new section of the code builds the `signal` column, where we assign a 1 if the `fast_ma` is higher than the `slow_ma`, else a 0, which means we stay in cash.

As we only know this signal after the trading day ends, we need to shift it one observation so the signal at (t-1) is applied to the returns at (t).
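A toy sketch of this lagging step (hypothetical numbers, not market data) shows why the shift matters: without it, the strategy would trade on information it doesn’t have yet.

```python
import pandas as pd

# Toy example: a signal computed at the close of day t can only be
# traded on day t+1, so shift(1) aligns signal(t-1) with R(t)
df = pd.DataFrame({
    'R':      [0.01, -0.02, 0.03],
    'signal': [1,     0,     1],
})
df['signal_lagged'] = df['signal'].shift(1, fill_value=0)
df['R_strategy'] = df['R'] * df['signal_lagged']
print(df['R_strategy'].tolist())  # → [0.0, -0.02, 0.0]
```

Note the day-1 signal of 1 is applied to the day-2 return of -0.02; skipping the shift would be a classic lookahead bias.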

```
def get_signal(data, ticker, fast_ma, slow_ma):
    close_adj = data[[ticker]].copy()
    close_adj.columns = ['close']
    # Compute returns
    close_adj['R'] = close_adj.close.pct_change().fillna(0)
    # Define indicators (EMA this time)
    close_adj['fast_ma'] = talib.EMA(close_adj.close, fast_ma)
    close_adj['slow_ma'] = talib.EMA(close_adj.close, slow_ma)
    close_adj = close_adj[~close_adj.slow_ma.isnull()]
    # Define signal: 1=long, 0=cash
    close_adj = close_adj.assign(
        signal=lambda x: np.where(x.fast_ma > x.slow_ma, 1, 0)
    )
    # Lag the signal so signal(t-1) is applied to Returns(t)
    close_adj['signal'] = close_adj['signal'].shift(1, fill_value=0)
    close_adj['R_strategy'] = close_adj.R * close_adj.signal
    return close_adj
```

The strategy returns are computed when the `close_adj.R` column is multiplied by the `close_adj.signal` column. As this is done on whole arrays, with no Python for loops, it’s very fast. This is why the implementation is called “vectorized”.
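To make the term concrete, here is a small numpy sketch (random toy data) showing that the single array multiplication computes exactly what an element-by-element loop would:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(0, 0.01, 1_000)                 # toy daily returns
signal = (rng.random(1_000) > 0.5).astype(int)  # toy 0/1 signal

# Vectorized: one multiplication over the whole arrays
vec = R * signal

# Loop version computing the same thing, element by element
loop = np.array([R[i] * signal[i] for i in range(len(R))])

assert np.allclose(vec, loop)
```

The vectorized form pushes the loop down into compiled code, which is where the speed comes from.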

Here is an example of how to call the `get_signal` function.

```
ticker = 'SPY'
df_signal = get_signal(data, ticker, fast_ma=10, slow_ma=65)
df_signal.head()
```

Now we have the stock returns (R) and the strategy returns (R_strategy). We have everything we need to compute the performance of this strategy.

But before that, I’ll plot the cumulative returns of the strategy. The R line represents the Buy & Hold strategy of this ticker while the R_strategy line represents the cumulative returns of the strategy.

`px.line(100 * (1 + df_signal[['R', "R_strategy"]]).cumprod(), title='Total Return')`

Here we can get some intuition of how the strategy behaves in a crisis. Note that it avoided a large portion of the 2008 crisis. However, with these parameters it wasn’t able to react in time to faster crises such as the COVID crash in 2020. Still, it avoided part of that crash and got back into the market when it recovered.

## 5. Computing the strategy performance

```
def performance(df_signal, ticker, freq='M', risk_free_rate=0.02):
    rets = df_signal[['R', 'R_strategy']].copy()
    rets.columns = ['Buy & Hold', 'Strategy']
    if freq == 'D':
        scale = 252
    elif freq == 'M':
        scale = 12
        rets = rets.resample(freq).agg(lambda x: (1 + x).prod() - 1)
    else:
        return None
    # Cumulative returns and drawdowns
    ret_cumulative = (1 + rets).cumprod()
    previous_peaks = ret_cumulative.cummax()
    drawdown = (ret_cumulative - previous_peaks) / previous_peaks
    # Annualized returns and risk
    annualized_returns = (1 + rets.mean())**scale - 1
    annualized_std_deviation = rets.std() * np.sqrt(scale)
    max_drawdown = drawdown.min() * -1
    df_risk_return = pd.DataFrame(
        dict(
            ticker=ticker,
            annualized_returns=annualized_returns,
            annualized_std_deviation=annualized_std_deviation,
        )
    )
    df_risk_return['Max Drawdown'] = max_drawdown
    # Risk-adjusted ratios
    df_risk_return = df_risk_return.assign(
        sharpe_ratio=lambda x: (x.annualized_returns - risk_free_rate) / x.annualized_std_deviation,
        calmar_ratio=lambda x: x.annualized_returns / x['Max Drawdown'],
    )
    return df_risk_return

# Example
performance(df_signal, ticker, freq='D', risk_free_rate=0.02)
```

The `performance` function computes how our strategy performs. I compute the annualized expected return of the strategy, its risk (standard deviation of returns), max drawdown, Sharpe ratio, and Calmar ratio.

If you are not familiar with these concepts, check out this YouTube playlist where I cover them in more detail.

## 6. Run Backtest

Now that we have defined the signals and performance calculation we can run our backtest over multiple parameters.

This will provide some insight into which parameters are best for our given strategy.

```
results = []
for fast_ma in range(5, 50, 1):
    for slow_ma in range(30, 200, 1):
        if fast_ma >= slow_ma:
            continue
        # Compute signal
        df_signal = get_signal(data, ticker, fast_ma, slow_ma)
        # Compute performance
        perf = performance(df_signal, ticker, freq='D', risk_free_rate=0.02)
        perf['fast_ma'] = fast_ma
        perf['slow_ma'] = slow_ma
        # Keep only the 'Strategy' row
        results.append(perf.tail(1))
df_res = pd.concat(results)
```

After running the backtest I can sort the results using the metrics I defined. If I sort by max drawdown, I get parameters that seem best at getting me out of large drops in the market.

However, this generally implies a drop in the expected return, 6.9% in this case.

`df_res.sort_values("Max Drawdown", ascending=True).head()`

## 7. Heatmap of Parameter Grid Search

After running the grid-search we can plot a heatmap with the performance for each parameter configuration.

This helps give an intuition of what we can expect with this simple strategy over a long period of time.

```
eval_metric = 'sharpe_ratio'
df_mat = df_res.pivot(index='fast_ma', columns='slow_ma', values=eval_metric)
fig = px.imshow(
    df_mat,
    color_continuous_scale='RdYlGn',
    aspect='auto',
)
fig.update_layout(
    title='Backtest Results - Sharpe Ratio',
    xaxis_title='Slow MA',
    yaxis_title='Fast MA',
)
fig
```

It’s important to understand that we shouldn’t mechanically expect the same results from the parameters that came out “best” in the grid search.

All this backtest tells us is which parameters did better IN THE PAST. That doesn’t mean the ranking will repeat in the future.

Even though these results are limited, this information is often useful, and it’s clearly better than guessing or “investing” by intuition, which is what most people do.
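One simple way to reduce the overfitting risk is to pick parameters on an earlier window and evaluate them on a later window the grid search never saw. This is a minimal sketch of that split; the helper name and dates are hypothetical, and a full walk-forward analysis would repeat it over rolling windows:

```python
import pandas as pd

def split_in_out_of_sample(returns, split_date):
    """Split a daily-returns frame into in-sample and out-of-sample parts.

    Run the grid search on the in-sample part only; reserve the
    out-of-sample part for one final, honest evaluation.
    """
    in_sample = returns.loc[:split_date]
    out_of_sample = returns.loc[split_date:].iloc[1:]  # drop the overlapping split day
    return in_sample, out_of_sample

# Toy usage with a hypothetical date range
idx = pd.date_range('2015-01-01', periods=6, freq='D')
rets = pd.DataFrame({'R': [0.01, -0.02, 0.0, 0.03, -0.01, 0.02]}, index=idx)
ins, out = split_in_out_of_sample(rets, '2015-01-03')
print(len(ins), len(out))  # → 3 3
```

If the parameters chosen in-sample also do well out-of-sample, that is weak but real evidence the strategy isn’t purely curve-fit.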

I hope you enjoyed the post. If you prefer a video version, check out this video tutorial.
