Friday, 29 January 2016

Correlations, Weights, Multipliers.... (pysystemtrade)

This post serves three main purposes:

Firstly, I'm going to explain the main features I've just added to my python back-testing package pysystemtrade: namely the ability to estimate parameters that were previously fixed - forecast and instrument weights, plus forecast and instrument diversification multipliers.

(See here for a full list of what's in version 0.2.1)

Secondly I'll be illustrating how we'd go about calibrating a trading system (such as the one in chapter 15 of my book); actually estimating some forecast weights and instrument weights in practice. I know that some readers have struggled with understanding this (which is of course entirely my fault).

Thirdly there are some useful bits of general advice that will interest everyone who cares about practical portfolio optimisation (whether or not you use pysystemtrade, or have read the book). In particular I'll talk about how to deal with missing markets, the best way to estimate portfolio statistics, and pooling information across markets; and I'll generally continue my discussion about using different methods for optimising (see here, and also here).

If you want to, you can follow along with the code, here.


Key


This is python:

system.forecastScaleCap.get_scaled_forecast("EDOLLAR", "carry").plot()


This is python output:

hello world

This is an extract from a pysystemtrade YAML configuration file:

forecast_weight_estimate:
   date_method: expanding ## other options: in_sample, rolling
   rollyears: 20

   frequency: "W" ## other options: D, M, Y

Forecast weights


A quick recap



The story so far: we have some trading rules (three variations of the EWMAC trend following rule, plus a carry rule), which we're running over six instruments (Eurodollar, US 10 year bond futures, Eurostoxx, MXP USD fx, Corn, and European equity vol, V2X).

We've scaled these (as discussed in my previous post) so they have the correct expected absolute value. So both of these things are on the same scale:

system.forecastScaleCap.get_scaled_forecast("EDOLLAR", "carry").plot()

Rolldown on STIR usually positive. Notice the interest rate cycle.

system.forecastScaleCap.get_scaled_forecast("V2X", "ewmac64_256").plot()

Notice how we moved from 'risk on' to 'risk off' in early 2015

Notice the massive difference in available data - I'll come back to this problem later.
 
However having multiple forecasts isn't much good; we need to combine them (chapter 8). So we need some forecast weights. This is a portfolio optimisation problem. To be precise we want the best portfolio built out of things like these:
Account curves for trading rule variations, US 10 year bond future. All pretty good....


There are some issues here then which we need to address.

An alternative which has been suggested to me is to optimise the moving average rules separately; and then as a second stage optimise the moving average group and the carry rule. This is similar in spirit to the handcrafted method I cover in my book. Whilst it's a valid approach it's not one I cover here, nor is it implemented in my code.


In or out of sample?


Personally I'm a big fan of expanding windows (see chapter 3, and also here). Nevertheless feel free to try different options by changing the configuration file elements shown here:

forecast_weight_estimate:
   date_method: expanding ## other options: in_sample, rolling
   rollyears: 20

   frequency: "W" ## other options: D, M, Y
Also the default is to use weekly returns for optimisation. This has two advantages: firstly, it's faster. Secondly, correlations of daily returns tend to be unrealistically low (because, for example, of different market closes when working across instruments).
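To see why, here's a toy illustration (nothing to do with the library internals): two return series share a common driver, but the second reacts a day late, as it would with staggered market closes. The daily correlation comes out near zero, while weekly returns recover most of it.

import numpy as np
import pandas as pd

# Illustration only: instrument B reacts to the common factor one day after A
np.random.seed(42)
common = np.random.normal(size=1000)
dates = pd.bdate_range("2010-01-01", periods=1000)
returns = pd.DataFrame({
    "A": common + np.random.normal(size=1000),
    "B": np.roll(common, 1) + np.random.normal(size=1000),
}, index=dates)

print(returns.corr().iloc[0, 1])                      # daily: close to zero
print(returns.resample("W").sum().corr().iloc[0, 1])  # weekly: much higher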


Choose your weapon: Shrinkage, bootstrapping or one-shot?


In my last couple of posts on this subject I discussed which methods one should use for optimisation (see here, and also here, and also chapter four).

I won't reiterate the discussion here in detail, but I'll explain how to configure each option.

Bootstrapping

This is my favourite weapon, but it's a little ..... slow.


forecast_weight_estimate:
   method: bootstrap
   monte_runs: 100
   bootstrap_length: 50
   equalise_means: True
   equalise_vols: True



We expect our trading rule p&l to have the same standard deviation of returns, so we shouldn't need to equalise vols; it's a moot point whether we do or not. Equalising means will generally make things more robust. With more bootstrap runs, and perhaps a longer length, you'll get more stable weights.
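For intuition, here's a stripped-down sketch of what the bootstrap method does (the real code lives in syscore/optimisation.py and also handles pooling, missing data and expanding windows; the function names here are just illustrative):

import numpy as np
import pandas as pd
from scipy.optimize import minimize

def one_shot_weights(returns):
    # Maximum Sharpe ratio, long-only weights summing to one (illustrative)
    mus, sigma = returns.mean().values, returns.cov().values
    n = len(mus)
    neg_sharpe = lambda w: -(w @ mus) / np.sqrt(w @ sigma @ w)
    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    return minimize(neg_sharpe, np.ones(n) / n,
                    bounds=[(0.0, 1.0)] * n, constraints=cons).x

def bootstrap_weights(returns, monte_runs=100, bootstrap_length=50):
    # Average the one-shot weights over many histories resampled with replacement
    all_weights = [one_shot_weights(returns.sample(bootstrap_length, replace=True))
                   for _ in range(monte_runs)]
    return pd.Series(np.mean(all_weights, axis=0), index=returns.columns)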

Shrinkage


I'm not massively keen on shrinkage (see here, and also here) but it is much quicker than bootstrapping. So a good work flow might be to play around with a model using shrinkage estimation, and then for your final run use bootstrapping. It's for this reason that the pre-baked system defaults to using shrinkage. As the defaults below show I recommend shrinking the mean much more than the correlation.


forecast_weight_estimate:
   method: shrinkage
   shrinkage_SR: 0.90
   shrinkage_corr: 0.50
   equalise_vols: True
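Roughly speaking, shrinkage pulls the estimated Sharpe ratios towards their cross-sectional average, and the correlation matrix towards the average off-diagonal correlation, before doing a single optimisation. A minimal sketch of the idea (my own illustration, not the library code):

import numpy as np

def shrink_inputs(sharpe_ratios, corr_matrix, shrinkage_SR=0.9, shrinkage_corr=0.5):
    # Pull Sharpe ratios and correlations towards simple priors (illustrative)
    sr = np.asarray(sharpe_ratios, dtype=float)
    shrunk_sr = shrinkage_SR * sr.mean() + (1 - shrinkage_SR) * sr

    corr = np.asarray(corr_matrix, dtype=float)
    n = corr.shape[0]
    prior = np.full((n, n), corr[~np.eye(n, dtype=bool)].mean())
    np.fill_diagonal(prior, 1.0)
    shrunk_corr = shrinkage_corr * prior + (1 - shrinkage_corr) * corr

    return shrunk_sr, shrunk_corr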


Single period


Don't do it. If you must do it then I suggest equalising the means, so the result isn't completely crazy.

forecast_weight_estimate:
   method: one_period
   equalise_means: True
   equalise_vols: True




To pool or not to pool... that is a very good question



One question we should address is: do we need different forecast weights for different instruments, or can we pool our data and estimate them together? Or to put it another way, does Corn behave sufficiently like Eurodollar to justify giving them the same blend of trading rules, and hence the same forecast weights?

forecast_weight_estimate:
   pool_instruments: True ##

One very significant factor in making this decision is actually costs. However I haven't yet included the code to calculate the effect of these, so for the time being we'll ignore them - even though they matter a great deal. Because I've chosen three slower EWMAC rule variations this omission isn't as serious as it would be with faster trading rules.

If you use a stupid method like one-shot then you probably will get quite different weights. However more sensible methods will account better for the noise in each instrument's estimate.

With only six instruments, and without costs, there isn't really enough information to determine whether pooling is a good thing or not. My strong prior is to assume that it is. Just for fun here are some estimates without pooling.

from matplotlib.pyplot import show, title
from systems.provided.futures_chapter15.estimatedsystem import futures_system

system=futures_system()
system.config.forecast_weight_estimate["pool_instruments"]=False
system.config.forecast_weight_estimate["method"]="bootstrap"
system.config.forecast_weight_estimate["equalise_means"]=False
system.config.forecast_weight_estimate["monte_runs"]=200
system.config.forecast_weight_estimate["bootstrap_length"]=104

system=futures_system(config=system.config)

system.combForecast.get_forecast_weights("CORN").plot()
title("CORN")
show()

Forecast weights for corn, no pooling

system.combForecast.get_forecast_weights("EDOLLAR").plot()
title("EDOLLAR")
show()



Forecast weights for eurodollar, no pooling

Note: Only instruments that share the same set of trading rule variations will see their results pooled.
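(As I explain in the comments below, pooling is nothing cleverer than stacking the rule p&l from each instrument into one longer history before optimising - which is why the rule variations have to match. Something along these lines, where pool_returns is just an illustrative name:)

import pandas as pd

def pool_returns(per_instrument_returns):
    # Stack per-instrument rule p&l frames (same rule columns) into one long history.
    # Instruments with more history implicitly get more weight in the estimate.
    return pd.concat(per_instrument_returns, axis=0)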
 

Estimating statistics


There are also configuration options for the statistical estimates used in the optimisation. For example: should we use exponentially weighted estimates? (This makes no sense for bootstrapping, but for other methods it's a reasonable thing to do.) Is there a minimum number of data points before we're happy with our estimate? Should we floor correlations at zero (short answer - yes)?


forecast_weight_estimate:
 

   correlation_estimate:
     func: syscore.correlations.correlation_single_period
     using_exponent: False
     ew_lookback: 500
     min_periods: 20     
     floor_at_zero: True

   mean_estimate:
     func: syscore.algos.mean_estimator
     using_exponent: False
     ew_lookback: 500
     min_periods: 20     

   vol_estimate:
     func: syscore.algos.vol_estimator
     using_exponent: False
     ew_lookback: 500
     min_periods: 20     
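As a rough guide to what an exponentially weighted correlation estimate with a floor looks like, here's a sketch using a recent pandas (this is not syscore.correlations.CorrelationEstimator itself):

import pandas as pd

def ew_correlation(weekly_returns, ew_lookback=500, min_periods=20, floor_at_zero=True):
    # Latest exponentially weighted correlation matrix, optionally floored at zero
    rolling = weekly_returns.ewm(span=ew_lookback, min_periods=min_periods).corr()
    latest_date = rolling.index.get_level_values(0)[-1]
    corr = rolling.loc[latest_date]
    if floor_at_zero:
        corr = corr.clip(lower=0.0)
    return corr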


Checking my intuition


Here's what we get when we actually run everything with some sensible parameters:

system=futures_system()
system.config.forecast_weight_estimate["pool_instruments"]=True
system.config.forecast_weight_estimate["method"]="bootstrap" 
system.config.forecast_weight_estimate["equalise_means"]=False
system.config.forecast_weight_estimate["monte_runs"]=200
system.config.forecast_weight_estimate["bootstrap_length"]=104


system=futures_system(config=system.config)

system.combForecast.get_raw_forecast_weights("CORN").plot()
title("CORN")
show()

Raw forecast weights pooled across instruments. Bumpy ride.
Although I've plotted these for corn, they will be the same across all instruments. Almost half the weight goes into carry; that makes sense since it's relatively uncorrelated with the other rules (half is what my simple optimisation method - handcrafting - would put in). Hardly any (about 10%) goes into the medium speed trend following rule, as it's highly correlated with the other two variations. Of the remaining variations the faster one gets a higher weight; the law of active management at play, I guess.

Smooth operator - how not to incur costs changing weights


Notice how jagged the lines above are. That's because I'm estimating weights annually. This is kind of silly; I don't really have tons more information after 12 months; the forecast weights are estimates - which is a posh way of saying they are guesses. There's no point incurring trading costs when we update these with another year of data.

The solution is to apply a smooth:

forecast_weight_estimate:
   ewma_span: 125
   cleaning: True
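Under the hood this is just an exponential moving average applied to each column of the weights matrix. A toy illustration with made-up weights and rule names from this post (the renormalisation at the end is belt and braces, since smoothing weights that already sum to one preserves the sum):

import pandas as pd

dates = pd.bdate_range("2014-01-01", periods=520)
raw_weights = pd.DataFrame(dict(carry=0.4, ewmac32_128=0.3, ewmac64_256=0.3), index=dates)
raw_weights.loc["2015-01-01":, "carry"] = 0.5        # the annual re-fit bumps the weights...
raw_weights.loc["2015-01-01":, "ewmac32_128"] = 0.2

smoothed = raw_weights.ewm(span=125).mean()          # ...so smooth out the step change
smoothed = smoothed.div(smoothed.sum(axis=1), axis=0)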


Now if we plot forecast_weights, rather than the raw version, we get this:

system.combForecast.get_forecast_weights("CORN").plot()
title("CORN")
show()



Smoothed forecast weights (pooled across all instruments)
There's still some movement; but any turnover from changing these parameters will be swamped by the trading the rest of the system is doing.



Forecast diversification multiplier


Now we have some weights we need to estimate the forecast diversification multiplier; so that our portfolio of forecasts has the right scale (an average absolute value of 10 is my own preference).


Correlations


First we need to get some correlations. The more correlated the forecasts are, the lower the multiplier will be. As you can see from the config options we again have the option of pooling our correlation estimates.
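For reference, the multiplier is the calculation from the book: one over the square root of w'Hw, where w are the forecast weights and H is the (floored) correlation matrix of forecasts. A quick sketch, with made-up numbers:

import numpy as np

def div_multiplier(weights, corr):
    # 1 / sqrt(w' H w), with negative correlations floored at zero
    w = np.asarray(weights, dtype=float)
    H = np.clip(np.asarray(corr, dtype=float), 0.0, None)
    return 1.0 / np.sqrt(w @ H @ w)

print(div_multiplier([0.45, 0.10, 0.20, 0.25],
                     [[1.0, 0.2, 0.2, 0.2],
                      [0.2, 1.0, 0.9, 0.9],
                      [0.2, 0.9, 1.0, 0.9],
                      [0.2, 0.9, 0.9, 1.0]]))   # roughly 1.3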


forecast_correlation_estimate:
   pool_instruments: True 

   func: syscore.correlations.CorrelationEstimator ## function to use for estimation. This handles both pooled and non pooled data
   frequency: "W"   # frequency to downsample to before estimating correlations
   date_method: "expanding" # what kind of window to use in backtest
   using_exponent: True  # use an exponentially weighted correlation, or all the values equally
   ew_lookback: 250 ## lookback when using exponential weighting
   min_periods: 20  # min_periods, used for both exponential, and non exponential weighting





Smoothing, again


We estimate correlations, and weights, annually. Thus as with weightings it's prudent to apply a smooth to the multiplier. I also floor negative correlations to avoid getting very large values for the multiplier.


forecast_div_mult_estimate:
   ewma_span: 125   ## smooth to apply
   floor_at_zero: True ## floor negative correlations


system.combForecast.get_forecast_diversification_multiplier("EDOLLAR").plot()
show()




system.combForecast.get_forecast_diversification_multiplier("V2X").plot()
show()

Forecast Div. Multiplier for Eurodollar futures
Notice that when we don't have sufficient data to calculate correlations, or weights, the FDM comes out with a value of 1.0. I'll discuss this more below in "dealing with incomplete data".


From subsystem to system


We've now got a combined forecast for each instrument - the weighted sum of trading rule forecasts, multiplied by the FDM. It will look very much like this:

system.combForecast.get_combined_forecast("EUROSTX").plot()
show()

Combined forecast for Eurostoxx. Note the average absolute forecast is around 10. Clearly a choppy year for stocks.
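The arithmetic behind that line is just a weighted sum scaled by the FDM, with the usual cap of plus or minus 20 from the book applied at the end. A sketch with made-up numbers:

import numpy as np

def combined_forecast(forecasts, weights, fdm, cap=20.0):
    # Weighted sum of scaled forecasts, multiplied by the FDM, then capped
    return np.clip(fdm * np.dot(weights, forecasts), -cap, cap)

print(combined_forecast([12.0, -4.0, 6.0, 8.0], [0.45, 0.10, 0.20, 0.25], 1.3))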


Using chapters 9 and 10 we can now scale this into a subsystem position. A subsystem is my terminology for a system that trades just one instrument. Essentially we pretend we're using our entire capital for just this one thing.


Going pretty quickly through the calculations (since you're either familiar with them, or you just don't care):

system.positionSize.get_price_volatility("EUROSTX").plot()
show()

Eurostoxx price volatility. A bit less than 1% a day in 2014, a little more exciting recently.

system.positionSize.get_block_value("EUROSTX").plot()
show()


Block value (value of 1% change in price) for Eurostoxx.


system.positionSize.get_instrument_currency_vol("EUROSTX").plot()
show()




Eurostoxx instrument currency volatility: volatility in euros per day


system.positionSize.get_instrument_value_vol("EUROSTX").plot()
show()

Eurostoxx instrument value volatility: volatility in base currency ($) per day, per contract



system.positionSize.get_volatility_scalar("EUROSTX").plot()
show()




Eurostoxx vol scalar: Number of contracts we'd hold in a subsystem with a forecast of +10




system.positionSize.get_subsystem_position("EUROSTX").plot()
show()

Eurostoxx subsystem position
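To tie the chain of plots above together, here's a rough sketch of how the quantities relate (illustrative names and numbers only - chapters 9 and 10 have the real thing):

def subsystem_position(combined_forecast, block_value, price_vol_perc, fx_rate,
                       trading_capital, perc_vol_target):
    # Scale a combined forecast into a subsystem position (sketch)
    daily_cash_vol_target = trading_capital * perc_vol_target / 16.0  # annual -> daily, ~sqrt(256)
    instrument_currency_vol = block_value * price_vol_perc            # per contract, local currency
    instrument_value_vol = instrument_currency_vol * fx_rate          # per contract, base currency
    vol_scalar = daily_cash_vol_target / instrument_value_vol         # contracts at a forecast of +10
    return vol_scalar * combined_forecast / 10.0

print(subsystem_position(10.0, block_value=300.0, price_vol_perc=1.0, fx_rate=0.75,
                         trading_capital=250000, perc_vol_target=0.20))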



Instrument weights


We're not actually trading subsystems; instead we're trading a portfolio of them. So we need to split our capital - for this we need instrument weights. Oh yes, it's another optimisation problem, with the assets in our portfolio being subsystems, one per instrument.


import pandas as pd

instrument_codes=system.get_instrument_list()

pandl_subsystems=[system.accounts.pandl_for_subsystem(code, percentage=True)
        for code in instrument_codes]

pandl=pd.concat(pandl_subsystems, axis=1)
pandl.columns=instrument_codes

pandl.cumsum().plot()
show()

Account curves for instrument subsystems
Most of the issues we face are similar to those for forecast weights (except pooling - you don't have to worry about that any more). But there are a couple more annoying wrinkles we need to consider.



Missing in action: dealing with incomplete data


As the previous plot illustrates we have a mismatch in available history for different instruments - loads for Eurodollar, Corn, US10; quite a lot for MXP, barely any for Eurostoxx and V2X.

This could also be a problem for forecasts, at least in theory, and the code will deal with it in the same way.

Remember when testing out of sample I usually recalculate weights annually. Thus on the first day of each new 12 month period I face having one or more of these beasts in my portfolio:
  1. Assets which weren't in my fitting period, and aren't used this year
  2. Assets which weren't in my fitting period, but are used this year
  3. Assets which are in some of my fitting period, and are used this year
  4. Assets which are in all of the fitting period, and are used this year
Option 1 is easy - we give them a zero weight.

Option 4 is also easy; we use the data in the fitting period to estimate the relevant statistics.

Option 2 is relatively easy - we give them a "downweighted average" weight. Let me explain. Let's say we have two assets already, each with a 50% weight. If we were to add a further asset we'd allocate it an average weight of 33.3%, and split the rest between the existing assets. In practice I want to penalise new assets, so I only give them half their average weight. In this simple example I'd give the new asset half of 33.3%, or 16.66%.
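In code, the penalty from that worked example is just this (new_asset_weight is a hypothetical helper, not part of the package):

def new_asset_weight(n_existing_assets, penalty=0.5):
    # Half the average weight that a brand-new asset would otherwise get
    return penalty / (n_existing_assets + 1)

print(new_asset_weight(2))   # 0.1666..., as in the example above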

We can turn off this behaviour, which I call cleaning. If we do we'd get zero weights for assets without enough data.


instrument_weight_estimate:
   cleaning: False
 


Option 3 depends on the method we're using. If we're using shrinkage or one period then, as long as there's enough data to exceed the minimum number of periods (default 20 weeks), we'll have an estimate. If we haven't got enough data then it will be treated as a missing weight; we'd use downweighted average weights (if cleaning is on), or give the absent instruments a zero weight (with cleaning off).

For bootstrapping we check to see if the minimum period threshold is met on each bootstrap run. If it isn't then we use average weights when cleaning is on. The less data we have, the closer the weight will be to average. This has a nice Bayesian feel about it, don't you think? With cleaning off, less data will mean weights will be closer to zero. This is like an ultra conservative Bayesian.



If you don't get this joke, there's no point in me trying to explain it (Source: www.lancaster.ac.uk)


Let's plot them


We're now in a position to optimise, and plot the weights:

(By the way because of all the code we need to deal properly with missing weights on each run, this is kind of slow. But you shouldn't be refitting your system that often...)

system.config.instrument_weight_estimate["method"]="bootstrap"
system.config.instrument_weight_estimate["equalise_means"]=False
system.config.instrument_weight_estimate["monte_runs"]=200
system.config.instrument_weight_estimate["bootstrap_length"]=104

system.portfolio.get_instrument_weights().plot()
show()


Optimised instrument weights
These weights are a bit different from equal weights, in particular the better performance of US 10 year and Eurodollar is being rewarded somewhat. If you were uncomfortable with this you could turn equalise means on.


Instrument diversification multiplier


Missing in action, take two


Missing instruments also affects estimates of correlations. You know, the correlations we need to estimate the diversification multiplier. So there's cleaning again:


instrument_correlation_estimate:
    cleaning: True


I replace missing correlation estimates* with the average correlation, but I don't downweight it. If I downweighted the average correlation the diversification multiplier would be biased upwards - i.e. I'd have too much risk on. Bad thing. I could of course use an upweighted average; but I'm already penalising instruments without enough data by giving them lower weights.

* where I need to, i.e. options two and three
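A sketch of that replacement rule (again, my own illustration rather than the library code):

import numpy as np

def clean_correlation(corr):
    # Replace missing pairwise correlations with the average of the ones we do have
    corr = np.asarray(corr, dtype=float)
    average = np.nanmean(corr[~np.eye(corr.shape[0], dtype=bool)])
    cleaned = np.where(np.isnan(corr), average, corr)
    np.fill_diagonal(cleaned, 1.0)
    return cleaned

print(clean_correlation([[1.0, 0.5, np.nan],
                         [0.5, 1.0, np.nan],
                         [np.nan, np.nan, 1.0]]))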

Let's plot it



system.portfolio.get_instrument_diversification_multiplier().plot()
show()


Instrument diversification multiplier


And finally...


We can now work out the notional positions - the subsystem positions, weighted by instrument weight, and multiplied by the instrument diversification multiplier.
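In other words (an illustrative one-liner, not the library internals):

def notional_position(subsystem_position, instrument_weight, idm):
    # Subsystem position, scaled down by its instrument weight and back up by the IDM
    return subsystem_position * instrument_weight * idm

print(notional_position(14.0, 0.15, 1.8))   # e.g. 14 contracts, a 15% weight, an IDM of 1.8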


system.portfolio.get_notional_position("EUROSTX").plot()
show()


Final position in Eurostoxx. The actual position will be a rounded version of this.


End of post


No quant post would be complete without an account curve and a Sharpe Ratio.

And an equation. Bugger, I forgot to put an equation in.... but you got a Bayesian cartoon - surely that's enough?
 

print(system.accounts.portfolio().stats())

system.accounts.portfolio().cumsum().plot()

show()



Overall performance. Sharpe ratio is 0.53. Annualised standard deviation is 27.7% (target 25%)

Stats: [[('min', '-0.3685'), ('max', '0.1475'), ('median', '0.0004598'),
('mean', '0.0005741'), ('std', '0.01732'), ('skew', '-1.564'),
('ann_daily_mean', '0.147'), ('ann_daily_std', '0.2771'),
('sharpe', '0.5304'), ('sortino', '0.6241'), ('avg_drawdown', '-0.2445'),
('time_in_drawdown', '0.9626'), ('calmar', '0.2417'),
('avg_return_to_drawdown', '0.6011'), ('avg_loss', '-0.011'),
('avg_gain', '0.01102'), ('gaintolossratio', '1.002'),
('profitfactor', '1.111'), ('hitrate', '0.5258')]]

This is a better output than the version with fixed weights and diversification multiplier that I've posted before; mainly because a variable multiplier leads to a more stable volatility profile over time, and thus a higher Sharpe Ratio.


86 comments:

  1. Rob, again thank you for the article.
    I am curious, is there an easy way to feed your system directly from Quandl instead of the legacy CSV files?

    ReplyDelete
  2. Getting data from the Quandl python API is very easy. The hard thing is to produce the two kinds of data - stitched prices (although Quandl do have this) and aligned individual contracts for carry. So the hard bit, at least for futures trading, is writing the piece that takes raw individual contracts and produces these two things.

    This is on my list to do...

    ReplyDelete
  3. I had a few Q's on above:

    OPTIMISATION

    When you optimise to assign weights to rules, what do you do in your OWN system:
    1. i) do you optimise the weights for each trading rule based on each instrument individually, so each trading rule has a different weight depending on the instrument, or ii) do you optimise the weights for trading rules based on pooled data across all instruments?
    2. if the answer above is ii) how do you assign the WEIGHTS TO THE INSTRUMENTS when you pool them in the optimisation to determine the WEIGHTS TO THE TRADING RULES? Are the instrument weights determined in a prior optimisation before assigning weights to trading rules? Is your process to first optimise the weights assigned to each instrument, and after this is done you pool the instruments based on these weights to optimise for the weights for each trading rule?


    FORECAST SCALARS

    When we calculate average forecast scalars, what do you personally do:
    1. do you calculate the median or arithmetic average?
    2. in order to calculate the average, do you personally pool all the instruments, or do you take the average forecast from each instrument individually?

    Apologies for the caps, could not find any other way to add emphasis.

    ReplyDelete
    Replies
    1. "1. i) do you optimise the weights for each trading rule based on each instrument individually, so each trading rule has a different weight depending on the instrument, or ii) do you optimise the weights for trading rules based on pooled data across all instruments?"

      Number (ii) but in the presence of different cost levels (code not yet written).


      "2. if the answer above is ii) how do you assign the WEIGHTS TO THE INSTRUMENTS when you pool them in the optimisation to determine the WEIGHTS TO THE TRADING RULES? Are the instrument weights determined in a prior optimisation before assigning weights to trading rules? Is your process to first optimise the weights assigned to each instrument, and after this is done you pool the instruments based on these weights to optimise the for the weights for each trading rule?"

      No, if you look at the code it is just stacking all the returns from different instruments. This means they are equally weighted, but actually implicitly higher weights are given to instruments with more data history.

      "FORECAST SCALARS

      When we calculate average forecast scalars, what do you personally do:
      1. do you calculate the median or arithmetic average?"

      median

      "2. in order to calculate the average, do you personally pool all the instruments, or do you take the average forecast from each instrument individually?"

      Pool.

      Rob

      Delete
    2. Hi Rob, Thanks for your answer above. I am unclear as to i) when in the process the instrument weights are calculated and ii) how these are calculated. Are you able to explain this?

      Delete
    3. The instrument weights are calculated when they're needed; after combining forecasts (chapter 8) and position scaling (chapters 9 and 10).

      As to how, it's just portfolio optimisation (of whatever specific kind you prefer; though I use bootstrapping on an expanding out of sample window). The assets in the portfolio are the returns of the trading subsystems, one for each instrument.

      Rob

      Delete
  4. Sorry Rob, I am still trying to wrap my head around this. So to confirm, the instrument weights are determined in a SEPARATE optimisation that is INDEPENDENT from the optimisation of the weights assigned to trading rules? So two separate optimisations?

    ReplyDelete
    Replies
    1. Yes. The forecast weights optimisation has to be done first; then subsequent to that you do one for the instrument weights.

      (of course it's feasible to do it differently if you like.... but I find it easier to do this way and that's what in the book and the code)

      Delete
  5. OK, this is clear in my mind now. Thank you!

    ReplyDelete
  6. Hi Rob,

    Can you perhaps write a blog post about how the Semi Automated trader could develop scaled forecasts? In the book, the examples of CFD bets (not available to those of us in the US) is very helpful, but what if we like the way in which your signals fluctuate from moderately strong to stronger?

    ReplyDelete
    Replies
    1. The instrument you're trading is irrelevant. It's just a matter of translating your gut feel into a number between -20 (strong sell) and +20 (strong buy). I'm not sure that's something I can blog about. Or have I misunderstood the question?

      Delete
  7. Right, that makes sense. Perhaps I'm just not fully understanding. Based on the walk-through examples in the book for the semi-automatic trader using CFDs, the signals aren't combined or anything fancy like that. Like you said, it's just a matter of translating gut feel into an integer.

    I just wanted to know if it were possible for the discretionary trader to develop a weighted combined forecast, similar to the staunch systems trader. One of the most attractive features of your system is the fact that the signal generation is done for you on a routine basis.

    Based on my limited understanding, it seemed like the semi-automatic trader is limited to explicit stop losses and arbitrary binary trading.

    ReplyDelete
    Replies
    1. Oh sure you can combine discretionary forecasts. If you post your email I'll tell you how (not a secret but a bit long for a comment). I moderate posts so I won't publish the one with your email in it.

      Delete
  8. Hi Robert,

    I've a question about forecast weights.

    At first, more theoretical...
    I want to use bootstrapping to determine the forecast weights. I think it's best to calculate separate forecast weights for each instrument because the costs can vary substantially per instrument. Also, in my opinion it's important to take the trading costs into account when calculating the forecast weights, because a fast trading system will generate a lot of trading costs (I work with CFDs) and I think a lower participation in the combined forecast for the faster system will be better.
    Do you agree with these ideas?

    Now more practical...
    My idea is to calculate a performance curve for each trading rule variation for each instrument and use this performance curves for bootstrapping.

    Is the following method correct :
    1. Daily calculation per instrument and per trading rule variation
    - calculate scaled forecast
    - calculate volatility scaler
    - calculate number of contracts
    - calculate profitloss (including trading costs)
    - create accountcurve

    2. use bootstrapping method per instrument using all the account curves for all used trading rule variations. The result should be the forecast weights per instrument (subsystem)

    Is this the correct way ?

    Thank you
    Kris

    ReplyDelete
    Replies
    1. Yes, definitely use trading costs to calculate weights, and if costs vary a lot between instruments then do them separately.

      The method you outline is correct.

      pysystemtrade will of course do all this; set forecast_correlation_estimate["pool_instruments"] to false

      Delete
    2. Hi Robert,

      Thank you for the confirmation.

      Kris

      Delete
  9. I was listening to Perry Kaufman podcast on Better System trader, and he said that true volatility adjustment doesn't work for stocks.

    The argument is that because stocks have low leverage, if you're trading a stock with low volatility you will need to invest a lot of money to bring that volatility up to match other stocks, and you may not have enough money to do that. Another option is to reduce the position of the other stocks, but then you're not using all the money.

    What he suggested is dividing an equal investment by the stock price.

    I wonder what your thoughts are on this?

    ReplyDelete
    Replies
    1. Generally speaking I think volatility adjustment works for any asset that has reasonably predictable / continuously adjusting volatility. There's nothing bad about stocks, except maybe illiquid penny rubbish, that makes them bad for vol sizing.

      BUT really low volatility is bad in any asset class.

      I discuss the problems of trading anything with really low volatility in my book. Essentially you should avoid doing it. If you haven't got leverage then as Perry says it will consume too much capital. If you have got leverage then you'll open yourself up to a fat tailed event.

      It also leads to higher costs.

      Delete
  10. I have two questions:

    1.) I may have missed it if you mentioned it somewhere, but how do you manage currency hedging? It seems like you're trading in pounds, so for instance how do you hedge contracts denominated in AUD?

    2.) What is your margin to equity? This is something I keep hearing about. For instance backtesting a few different strategies and running the margins in CME database shows a margin to equity of about 35% when I am targeting 15% vol. This seems high compared to other managed futures strategies that say about 15% margin to equity and have higher volatility(even while trading more markets than I). Any thoughts would be more than appreciated!!

    ReplyDelete
    Replies
    1. You don't need to hedge futures exposure, just the margin and p&l. My policy is straightforward - to avoid building up excessive balances in any currency.

      My margin:equity is also around 35%, but on 25% volatility. I agree that your margin sounds rather high.

      Delete
    2. Thank you!! Would you mind providing just a simple example of how the currency hedging works?

      Also, I'm trading markets similar to yours and I can't see my margin to equity being correct, would you agree?

      Delete
    3. I buy an S&P future @ 2000. The notional value of the contract is 2000 x $50 = $100K. I need to post $6K margin. I convert say £4K GBP to do this.

      Scenario a) Suppose that GBPUSD changes such that the rate goes from 1.5 to 2.0. I've lost £1K since my margin is worth only £3K. But I'm not exposed to losses on the full 100K.

      Scenario b) Suppose the future goes to 2200 with the fx rate unchanged. I've made $50 x 200 points = $10,000. I sweep this back home to GBP leaving just the initial margin. I now have $10K in GBP; i.e. £6,666 plus $6K margin.

      Scenario c) Suppose the future goes to 2200. I've made $10,000. I don't sweep and GBPUSD goes to 2.0. I've now got the equivalent of £5,000 in profits and £3,000 in margin. I've lost £1,666 plus the losses on my margin as in scenario (a).

      I agree your margin does sound very high.

      Delete
    4. Ahh I see. Thank you very much, very helpful to me! Your response is greatly appreciated.

      Thank you for your work and love the book!

      Delete
  11. I hope you don't mind questions!

    You say you have a 10% buffer around the current position(i.e if the weight at rebalance is 50% and the target is 45%, you keep it at 50% because it is within 10%). However, what if you have a situation where the position changes from, say, +5% to -4%? This is within the 10% buffer but the signs have changed, what do you do with your position?

    ReplyDelete
  12. Hi Rob,
    If you don't mind me asking, are your log-scale equity curve charts in base 'e' or base 10?
    Thanks

    ReplyDelete
    Replies
    1. Neither. They are cumulated % curves. So "5" implies I've made 500% return on capital if I hadn't done any compounding. A log curve of compounded returns would look the same but have some different scale.

      Delete
  13. Also, from what I have read, it seems your instrument and rule weights are only updated each time a new instrument enters your system, so you hardcode these weights in your own config; however, these weights do incrementally change each day as you apply a smooth to them. How can one set this up in pysystemtrade? I understand how you hardcode the weights in the config, but how do I apply a smooth to them in pysystemtrade? Or is this done automatically if I included e.g., 'instrument_weight_estimate: ewma_span: 125' in the config?

    ReplyDelete
    Replies
    1. At the moment the code doesn't support this. However I think it makes sense to smooth "fixed" weights as instruments drift in and out, so I'll include it in the next release.

      Delete
    2. Now added to the latest release. Parameter is renamed instrument_weight_ewma_span: 125 (and same for forecasts). Will apply to both fixed and estimated weights. Set to 1 to turn off.

      Delete
    3. Hi Rob,
      Can you provide some further details on how to use fixed weights (that I have estimated), yet apply a smooth to them? I've been unable to use 'instrument_weight_ewma_span' to fulfil this purpose... Thanks!

      Delete
    4. http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.ewma.html

      Delete
    5. Hi Rob,
      I’ve not been clear. I understand how EWMA works and the process of smoothing.
      The problem I am having is that I am using weekly bootstrapping to estimate instrument weights. However, each day when I run pysystemtrade the calculated instrument weights can vary significantly from day to day due to the nature of bootstrapping. This leads to situations where, for example, pysystemtrade generated a trade yesterday (which I executed), but when I run it today the instrument weight estimates have changed enough that yesterday's trade no longer shows up as having been generated. This makes me less trusting of the backtested performance, as the majority of trades that were historically generated but excluded after resampling are losing trades.
      I only sample the market once a day generally (so that repeated sampling of the market overwriting the current day’s mark is not an issue).
      I would like to use the bootstrapping to estimate the weights ANNUALLY and apply the smooth to adjust between last year’s calculated weight, and today’s. But if I am using fixed weights (after having estimated via bootstrapping) by setting them as fixed in the config, there are no longer two data points to smooth between as I have only one fixed estimate in the config.
      How can I insert an historical weight for last year and a new fixed weight this year (by fixing it in the config) and smooth between them?

      Delete
    6. "I am using weekly bootstrapping to estimate instrument weights...." I think this is a little... well if I'm being honest I think its insane.

      Okay so to answer the question: for backtesting I use one config, and then for my live trading system I use another config which has fixed weights. Personally I run these separately for different reasons - the backtest to get an idea of historical performance, the 'as live' backtest with fixed weights to compare against what I'm currently doing and for actual trading.

      There is no configurable way of mixing these, so you'd need to write some code that takes the estimate bootstrapped weights and then replaces them with fixed weights after a certain date.

      Delete
    7. Thanks for the reply. I had applied the same method for the instrument weights as for the forecast weights. You'd mentioned above:
      "Also the default is to use weekly returns for optimisation. This has two advantages; firstly it's faster. Secondly correlations of daily returns tend to be unrealistically low (because for example of different market closes when working across instruments)."
      Why would the default for forecast weights be weekly but not for instrument weights?
      Thanks!

      Delete
    8. Oh sorry I misunderstood. You are using WEEKLY RETURNS to estimate instrument weights: that's fine. I thought you were actually redoing the bootstrapping every week.

      Delete
  14. Hi Rob,
    If I wanted to apply a trading rule to one instrument, say ewmac8_32 just to Corn, and another trading rule to another instrument, say ewmac32_128 just to US10, and combine them into a portfolio so that I could get the account statistics, how could I do that? The typical method of creating systems obviously applies each trading rule to each market.

    I suspect that this would have to be done at the TradingRule stage such that a TradingRule (consisting of the rule, data, and other_args) would be constructed for the 2 cases above. However, I'm having trouble passing the correct "list" of data to the TradingRule object. And, if that is possible, what would need to be passed in for the "data" at the System level i.e. my_system=System([my_rules], data)? I suspect that if all this is possible, it could also be done with a YAML file correct? Thank you so much for any advice and pointing me in the right direction!

    ReplyDelete
    Replies
    1. Easy. The trading rule object should contain all rules you plan to use.

      If using fixed weights:

      YAML:
      forecast_weights:
        CORN:
          ewmac8_32: 1.00
        US10:
          ewmac32_128: 1.00

      Python:
      config.forecast_weights=dict(CORN=dict(ewmac8_32=1.0), US10=dict(ewmac32_128=1.0))


      If using estimated weights:

      YAML:
      rule_variations:
        CORN:
          - "ewmac8_32"
        US10:
          - "ewmac32_128"

      Python:
      config.forecast_weights=dict(CORN=["ewmac8_32"=1.0], US10=["ewmac32_128"=1.0])

      (In this trivial example you wouldn't need to estimate, but you could specify multiple rules of different kinds to do so)

      Delete
  15. Thank you, will test this out!

    ReplyDelete
  16. Hi Rob,
    Thank you for an excellent book. I am trying to rewrite some parts of your system in a different language (broker doesn't support python) and add live trading. However I got a bit stuck while I was trying to reproduce the calculations of volatility scalar. For some reason when I request system.positionSize.get_volatility_scalar("CORN") I receive just a series of NaNs, but the subsystem position is somehow calculated. Don't really understand why is that happening

    ReplyDelete
    Replies
    1. Can you please raise this is an issue on github and include all your code so I can reproduce the problem

      https://github.com/robcarver17/pysystemtrade/issues/new

      Delete
    2. Rob,

      I guess I solved the problem. The issue was the DEFAULT_DATES was set up to December 2015, while data in legacycsv was up to May 2016. So the fx_rate USD-USD wasn't defined after December 2015 causing all the problems.

      Thank you for the fast response, I'm still getting familiar with GitHub.

      Delete
  17. Hi Rob, I tried to reproduce forecast weight estimation with pooling, and bootstrap, using this code

    from matplotlib.pyplot import show, title
    from systems.provided.futures_chapter15.estimatedsystem import futures_system

    system=futures_system()
    system.config.forecast_weight_estimate["pool_instruments"]=True
    system.config.forecast_weight_estimate["method"]="bootstrap"
    system.config.forecast_weight_estimate["equalise_means"]=False
    system.config.forecast_weight_estimate["monte_runs"]=200
    system.config.forecast_weight_estimate["bootstrap_length"]=104


    system=futures_system(config=system.config)

    system.combForecast.get_raw_forecast_weights("CORN").plot()
    title("CORN")
    show()

    The output came out different than your results,

    https://dl.dropboxusercontent.com/u/5114340/tmp/weights.png
    https://dl.dropboxusercontent.com/u/5114340/tmp/weights.log

    Did I have to configure somethings through YAML as well as Python code? It seemed like the code above was enough.

    Thanks,

    ReplyDelete
    Replies
    1. Maybe you have changed the config, because when I ran the lines you suggested I got the right answer (subtly different perhaps because of randomised bootstrapping, and because I've introduced costs). Try refreshing to the latest version and make sure there are no conflicts.

      Delete
  18. One coding question for the correlation matrix - Chapter 15 example system. With this code,

    http://goo.gl/2caO1K

    I get 0.89 for E$-US10 correlation, Table 46 in Systematic Trading says 0.35. I understand ST table combines existing numbers for that number, but the difference seems too big. Maybe I did something wrong in the code? I take PNL results for each instrument, and feed it all to CorrelationEstimator.

    Thanks,

    ReplyDelete
    Replies
    1. Another code snippet, this one is more by the book,

      goo.gl/txN63u

      I only left two instruments, EDOLLAR and US10, included two EWMACs and one carry, with equal weights on each instrument. I get 0.87 for correlation.

      Delete
    2. Yes 0.35 is for an average across all asset classes. It's arguable whether STIR and bonds are different asset classes; which is why I grouped them together in the handcrafted example in chapter 15. Clearly you'd expect the US 10 year rate and Eurodollar futures to be closely correlated.

      Delete
    3. Thanks! Yes I had the feeling these two instruments were closely correlated, just was not sure if my calculation was off somehow. Great. And since, according to ST, E$ and US10 are from different geographies that is a form of diversification, and the Ch 15 portfolio has positive SR, so we're fine.

      Delete
  19. Dear Rob,

    where can I find information on how to calculate account curves for trading rule variations from raw forecasts?

    Do I assume I use my whole trading capital for my cash volatility target to calculate position size and then return, or should I pick a certain % volatility target, assuming ("guessing") in advance a certain Sharpe ratio I'm planning to achieve on my portfolio?

    Thanks,
    Peter

    ReplyDelete
    Replies
    1. The code assumes we use some abstract notional capital and volatility target (you can change these defaults). Or if you use weighted curves https://github.com/robcarver17/pysystemtrade/blob/master/docs/userguide.md#weighted-and-unweighted-account-curve-groups it will give you the p&l as a proportion of your total capital.

      Delete
    2. Hi Rob, I am getting tangled up in how the weighted curve groups work, specifically accounts.pandl_for_all_trading_rules. Been over the user guide several times and still don't get it, so some questions:-
      - when I look at accounts.portfolio().to_frame and get the individual instrument component curves, they all sum up nicely to accounts.portfolio().curve()
      - when I look at accounts.pandl_for_all_trading_rules().to_frame() the individual rule curves look like they are giving a percentage (in the chapter 15 config, the curve rises from 0 -> 400 ish)
      - I am guessing this is a percentage of notional capital, so I am dividing this by 100 and multiplying by notional
      - however I cannot get even close to accounts.portfolio().curve()
      - the shape looks very similar, the numbers differ from the portfolio curve by a suspiciously stable factor of 1.38
      - you point out in your user guide that "it will look close to but not exactly like a portfolio account curve because of the non linear effects of combined forecast capping, and position buffering or inertia, and rounding"
      - however I still cannot get them close even when I configure buffering to 0 and capping to high (1000)
      - clarifying questions:-
      - is the output of pandl_for_all_trading_rules().curve() in fact the percentage of notional capital or do I have that wrong?
      - when you say (user guide, pandl_for_trading_rule) "The total account curve will have the same target risk as the entire system. The individual curves within it are for each instrument, weighted by their contribution to risk." what exactly do you mean by contribution to risk? Are we now talking about a percentage of the system's target volatility? (20% or 50K in this configuration)
      I appreciate any insights you can give here.

      Delete
    3. Can you send me a private mail?

      Delete
  20. Hi Rob,

    I'm searching for the historical data on the websites you mentioned in the book. I'm looking at the six instruments you also use in this post. On Quandl I can find continuous contracts, but these use a rollover method at contract expiry and there is no price adjustment. I'm wondering if this is good enough for backtesting, because the effective rolling is totally different from the (free) data from Quandl. Also with the premium subscription there are only limited methods for rolling. For example: if we roll corn futures in the summer and work only on December contracts, I think this is not possible with Quandl (and I think also other data providers like CSIData.com). I'm thinking of writing my own rolling methods. Is this a good idea, and is it necessary to do this (it's time consuming)? How do you handle this problem?

    Kris

    ReplyDelete
    Replies
    1. I wrote my own rollover code. Soon I'll publish it on pysystemtrade. In the meantime you can also get my adjusted data: https://github.com/robcarver17/pysystemtrade/tree/master/sysdata/legacycsv

      Delete
    2. Thank you so much for the link Rob! Very useful for me. I can use this data to do my own calculations with my own program (written in VB.NET).
      What do you do with the gaps: fill them with the previous day's values so all data series are in sync? Or skip the line, with the result that the data series are not in sync?

      Delete
    3. If I'm calculating rolldown which is PRICE - CARRY I first work it out for each day, so I'll have occasional Nans. I then forward fill the values, although not too early as I use the value of the forecast to work out standard deviations for scaling purposes, and premature forward filling will reduce the standard deviation.

      Delete
    4. OK, that's also the way I do. Calculate the forecasts on the raw data (so with gaps). Afterwards fill it to bring all instruments in sync so it's much easier to calculate PL.

      Another question about the legacycsv files from GitHub: when I look at, for example, V2X, I see the latest prices don't exactly match the individual contracts from Quandl. Am I missing something?

      For example :
      file V2X_price.csv at 2016-07-01 : 25,4
      file V2X_carrydata.csv at 2016-07-01 : 25,4 and contract expiry is 201608
      (this 2 files matches so that's OK and I know you get the values from the august contract)
      If I go to the Quandl website and take this particular contract (https://www.quandl.com/data/EUREX/FVSQ2016-VSTOXX-Futures-August-2016-FVSQ2016) then I see the settlement for 2016-07-01 value 25.7

      Also checked this for Corn and this has also a small deviation. I suppose you use backwards panama ?

      What's the reason for this small deviations ?

      Delete
    5. The data from about 2.5 years ago isn't from quandl, but from interactive brokers.

      Delete
  21. This comment has been removed by the author.

    ReplyDelete
  22. Hi, Rob! I'm struggling with forecast correlation estimates used for fdm calculation, could you plz explain what is ew_lookback parameter and how exactly you calculate ewma correlations?



    E.g. with pooled weekly returns, do I use the first ew_lookback = 250 data points to calculate ewma correlations, then expand my window to 500 data points and calculate correlations on this new set using an ewma over 500, etc.? Why use 250 and not 52 if using weekly returns?

    Thank you!

    ReplyDelete
    Replies
    1. These are the defaults: frequency: "W"
      date_method: "expanding"
      ew_lookback: 500

      An expanding window means all data will be used.

      Yes the ew_lookback of 500 implies a half life of ~10 years on the exponential weighting. If you think that is too long then of course reduce it. Personally I don't see why correlations should change that much and I'd rather have a longer estimate.

      Delete
    2. So ew_lookback just specifies my decay factor which i then use for all the data points?

      How do i go about pooling? e.g. I have asset1 with history from 2010 to 2016 (10 trading rules and variations returns) and asset2 from 2008 to 2016 (10 trading rules and variations returns), do i just stack forecast returns to get total of 14 years of data and calculate correlations 10 x 10 on all of the data or what?

      I'm confused

      Delete
    3. Yes pooled returns are stacked returns https://github.com/robcarver17/pysystemtrade/blob/master/syscore/pdutils.py df_from_list is the critical function.

      Delete
  23. Hello, after looking through the python code, I wonder how you came up with the adj_factor for costs when estimating forecast weights? via simulation? THANKS!

    ReplyDelete
    Replies
    1. Please tell me which file you are looking at and the line number please.

      Delete
    2. syscore/optimisation.py, line 322:

      # factors. First element of tuple is SR difference, second is adjustment
      adj_factors = ([-.5, -.4, -.3, -.25, -.2, -.15, -.1, -0.05, 0.0, .05, .1, .15, .2, .25, .3, .4, .5],
                     [.32, .42, .55, .6, .66, .77, .85, .94, 1.0, 1.11, 1.19, 1.3, 1.37, 1.48, 1.56, 1.72, 1.83])


      def apply_cost_weighting(raw_weight_df, ann_SR_costs):
          """
          Apply cost weighting to the raw optimisation results
          """

          # Work out average costs, in annualised sharpe ratio terms
          # In sample for vol estimation, but shouldn't matter much since target vol
          # should be the same

          avg_cost = np.mean(ann_SR_costs)
          relative_SR_costs = [cost - avg_cost for cost in ann_SR_costs]

          # Find adjustment factors
          weight_adj = list(
              np.interp(relative_SR_costs,
                        adj_factors[0],
                        adj_factors[1]))
          weight_adj = np.array([list(weight_adj)] * len(raw_weight_df.index))
          weight_adj = pd.DataFrame(
              weight_adj,
              index=raw_weight_df.index,
              columns=raw_weight_df.columns)

          return raw_weight_df * weight_adj

      Delete
  24. Hello Rob,
    Would you consider making the ewma_span period for smoothing your forecast weights a variable instead of fixed value, perhaps by some additional logic to detect different volatility 'regimes' that are seen in the market? Or maybe such a notion is fair, but this is the wrong place to apply it, and should be applied at the individual instrument level or in strategy scripts?

    ReplyDelete
    Replies
    1. No this smacks of overfitting. Put such evil thoughts out of your head. The point of the smooth is to reduce turnover on the first of january each year, not to make money.

      Delete
    2. (goes to the blackboard to write "I will not overfit" 50 times)...sorry, I've read your statements on overfitting more than once, but had a lapse in memory when this question popped into my thick skull. Thanks for your response.

      Delete
  25. Hi Robert,

    For the diversification multiplier you mention using exponential weighting. Where or how do you implement this? On the returns, or on the deviations of the returns from the expected returns (so just before the calculation of the covariances)? Or maybe somewhere else?

    Can you give me some direction?

    Thanks

    Kris

    ReplyDelete
    Replies
    1. No, on the actual multiplier. It's calculated from correlations, updated annually. Without a smooth it would be jumpy on the 1st January each year.

      Delete
    2. OK, but in this article I found 2 different parameters referring to exponential weighting:

      - under 'Forecast Diversification Multiplier' --> 'correlation' : I found "using_exponent: True # use an exponentially weighted correlation, or all the values equally"

      - under 'Forecast Diversification Multiplier' --> 'Smoothing again' : I found "ewma_span: 125 ## smooth to apply"

      I am a little bit confused about the 2 parameters. I understand that the second parameter (smoothing again) is to smooth the jump on the 1st January each year.

      But what about the first parameter (correlation)? I thought that you use some kind of exponential weighting for calculating the correlations, but maybe I'm wrong? Sorry, but it is not so clear to me.

      Kris

      Delete
    3. Sorry, yes I use exponential weighting a lot. With respect to the first, yes I calculate correlations using an exponential estimator: http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.ewmcorr.html

      Delete
    4. Thanks for this tip!

      I always try to write my own code (I don't like depending on others' code) and also I don't see how I can use the pandas libraries in VB.NET.

      But I've found the functions here :
      https://github.com/pandas-dev/pandas/blob/master/pandas/window.pyx --> EWMCOV

      and here :
      https://github.com/pandas-dev/pandas/blob/v0.19.2/pandas/core/window.py#L1576-L1596 --> corr
      So I can analyse how they do the stuff and write it in VB.NET.

      Kris

      Delete
    5. I see that the ewm.corr function returns a list of correlations for each date and not a correlation matrix.
      For the classic corr function the result is a matrix of correlation coefficients.

      In your code (https://github.com/robcarver17/pysystemtrade/blob/ba7fe7782837b0df0dea83631da19d98a1d8c84f/syscore/correlations.py#L173) I see you only take the latest value for each year of the ewm.corr function.
      I would expect that we must take a kind of average of all correlation values for a pair to calculate the correlation coefficient for each pair. Can you clarify this? Thanks.

      Kris

      Delete
    6. ewm.corr returns rolling correlations; each element in the list is already an exponentially weighted average of correlations. Since I'm doing the rolling through time process myself I only need the last of these elements.

      Delete
    7. OK, but in your simulations you work with an expanding window and do calculations yearly based on weekly data. If we use EWM-span of 125 it means the rolling correlations go back roughly about 3 years (125*5 days). So if for example the total period is from 1990-2016, is the last element of last calculation (1990-2016) then a correct estimate of the correlation of the whole period, because data before 2012 is 'ignored' ?

      Maybe it's then faster to work with a rolling out-of-sample frame to do this calculations ?

      Or is my idea on this not correct ?

      Kris

      Delete
    8. Well 92% of the weight on the correlations will be coming from the last 3 years. So yes you could speed this up by using a rolling out of sample although the results will be slightly different. 5 years would be better as this gets you up to 99%.

      Delete
  26. Rob, in your legacy.csv modules, some specific futures have the "price contract" as the "front month" (closest contract), like Bund, US20 & US10, etc.; meanwhile others such as Wheat, gas, crude, etc. have the "carry contract" as the front month. Is this by design?

    ReplyDelete
    Replies
    1. Yes. You should use a nearer month for carry if you can, and trade further out, but this isn't possible in bonds, equities or FX. See appendix B.

      Delete
  27. Hi Rob,

    Thank you so much for your book. It is very educational. I was trying to understand more about trading rule correlations in "Chapter 8: Combined Forecasts". You mentioned back-testing the performance of trading rules to get correlations.

    Could you share a bit more insights on how you get the performance of trading rules, please?
    (1) Do you set a buy/sell threshold at +/-10? Meaning that no position is held when the signal is in [-10,10], 1 position is held when the signal is in [10,20] or [-20,-10], and 2 positions are held when the signal is at -20/+20?
    (2) Trading cost is considered? (I think the answer is yes.)
    (3) You enter a buy trade, say at signal=10. When do you exit the trade - when signal<10 or signal=0?

    Or do you use dynamic positions, meaning the position varies with the signal all the time?

    Another question regarding optimisation:
    In the formula: f*w - lemada*w*sigma*w' to estimate weights
    (1) f is rules' sharpe ratio calculated using the rules' historical performance pooled from all instruments or just the sharpe of the rule from the instrument we look at?
    (2) how do you define lemada? =0.0001? if so, is it always 0.0001?

    Sorry if those two questions had been asked before.

    Thanks,
    Deano

    ReplyDelete
    Replies
    1. To get the performance of a trading rule you run through the position sizing method in the book allocating 100% to a given trading rule.

      1) No, that isn't how the system works at all. Read the rest of the book before asking any more questions.
      2) yes - again this in discussed later in the book
      3) No, I use continuous positions. You need to read chapter 7 again as you don't seem to have quite got the gist.

      f*w - lemada*w*sigma*w

      I don't think I've ever used this formula in my book, or on my blog, so I can't really explain it.

      Delete