My first book: "Systematic Trading"

I am the author of "Systematic Trading", which was published by Harriman House in 2015.

(See here for information about "Smart Portfolios", my second book.)

For more information:

To buy, I'd prefer it if you went to the publisher's page:

There is also a Japanese edition, available here.

I'd prefer it if you didn't buy the book on Amazon. Get it from the publisher. Is this a moral stand against their tax-dodging, employee-exploiting business? It can be if you like. Though coincidentally, I also get a larger royalty if you buy direct.

(Naturally I'd rather you bought the book on Amazon than not at all. Of course, if you do buy the book from Amazon, then please review it. Be nice.)



Perry Kaufman

"A remarkable look inside systematic trading never seen before, spanning the range from small to institutional traders. This isn't only for algorithmic traders, it's valuable for anyone needing a structure - which is all of us. Carver explains how to properly test, apply constant risk, size positions and portfolios, and my favorite, his "no rule" trading rule, all explained with scenarios. Reading this will benefit all traders." - Perry Kaufman, author of Trading Systems and Methods, 5th Edition (Wiley, 2013)

Brenda Jubin (Reading the markets)

"The days of Richard Dennis and his “turtles” with their alleged 100% per year profit are long gone, but their mystique lives on...

Robert Carver is more modest—and more realistic. At the same time he has more to offer the investor or trader who has a spark of creativity and intellectual curiosity. Systematic Trading: A Unique New Method for Designing Trading and Investing Systems (Harriman House, 2015) is a thoughtful, and thought-provoking, journey through the process of creating modular rule-based portfolios.

... (Carver) isn’t just some ordinary Joe with a computer and a bunch of back-testing software. He has clearly thought about what makes a good systematic trader and a good systematically-driven portfolio. We can be grateful that he decided to share his insights with us. " Reading the markets (longer review - read more here)


Steve Le Compte (CXO Advisor)

"In summary, investors will likely find Systematic Trading a rational and practical approach to building diversified, risk-managed investment/trading portfolios. The book offers quantified examples throughout." CXO Advisor (longer review - read more here)

Amazon reviews: 4.9/5 from 9 reviews (read the full reviews here); 4.7/5 from 5 reviews (read the full reviews here).


  1. Hello, Thank you for your excellent blog and book. I was reading your book and you mention backtesting on randomly generated data that contains different lengths of trends. How do you generate that data? Can you give an example of how you would use it?

    1. Easy. Start with a sawtooth waveform with some amplitude. Set the period of the waveform to double the length of the trend you want. Then difference the waveform to get returns. To each daily return add some Gaussian noise with mean zero and some volatility. The ratio of the noise volatility to the amplitude of the wave is inversely proportional to the signal:noise ratio: a low ratio means you have only a little noise and lots of clear trends, and vice versa. Then cumulate your returns to get a price series.

      Generate a bunch of these, varying the period length and signal:noise ratio. Then run your trend following rules over them (Monte Carlo, lots of runs). You can then discover what the optimal trend length (in this stylised world) is for a given trend following speed.

      You can also look at the relationship between signal:noise and profit (hint: more noise = less profit!), although that's less interesting.
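
      The recipe above (a triangular or sawtooth wave, differenced to get returns, Gaussian noise added, then cumulated back into a price series) can be sketched as follows. This is only an illustration; the function and parameter names are my own, not from the book.

```python
import numpy as np

def make_fake_prices(trend_length=10, noise_vol=0.1, amplitude=1.0,
                     n_days=1000, seed=None):
    """Artificial price series: a triangular wave with period
    2 * trend_length, differenced into returns, Gaussian noise added,
    then cumulated back into prices."""
    rng = np.random.default_rng(seed)
    period = 2.0 * trend_length
    t = np.arange(n_days)
    # triangular wave oscillating between -amplitude and +amplitude
    wave = amplitude * (2 * np.abs(2 * (t / period - np.floor(t / period + 0.5))) - 1)
    # difference the waveform to get the underlying 'trend' returns
    returns = np.diff(wave)
    # noise_vol relative to amplitude sets the (inverse) signal:noise ratio
    returns = returns + rng.normal(0.0, noise_vol, size=len(returns))
    # cumulate the returns back into a price series
    return np.cumsum(returns)

prices = make_fake_prices(trend_length=10, noise_vol=0.1, seed=1)
```

      Generating many of these with different trend_length and noise_vol values gives the Monte Carlo inputs described above.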

      PS If you liked my book please put a positive review on Amazon; I'd really appreciate it.

    2. Hi Rob, I appreciate the reply. Sorry for the double post; I thought I had posted the original in the wrong section, as this refers to the book.

      I have been using a different method to generate random data. At first I was using an Ornstein–Uhlenbeck process, which is more similar to your method. However, I was wondering about your thoughts on using the returns of the actual instrument(s) you are looking to trade, and then bootstrapping those with replacement. That way you are selecting returns from the sample distribution you are actually going to trade in the end, although it does break down any autocorrelation relationships. So sometimes I chunk them into blocks of 5 or 10 periods, which may keep some of that autocorrelation. What do you think of this method?

      I like your more stylised method as it gives you absolute control over every parameter of the data, from noise to trend amplitude. I will definitely give it a go.

    3. You are right that you need to block bootstrap to keep the autocorrelation. The blocks need to be long enough relative to the length of trend you are looking to analyse (perhaps 3x longer). This means blocks of several weeks or months, or even a couple of years.
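
      A minimal sketch of such a block bootstrap (contiguous blocks resampled with replacement, so the autocorrelation inside each block survives); the function name is my own:

```python
import numpy as np

def block_bootstrap(returns, block_length, n_samples, seed=None):
    """Resample contiguous blocks of a return series with replacement,
    preserving autocorrelation within each block."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns)
    n_blocks = int(np.ceil(n_samples / block_length))
    # random starting point for each block
    starts = rng.integers(0, len(returns) - block_length + 1, size=n_blocks)
    resampled = np.concatenate([returns[s:s + block_length] for s in starts])
    return resampled[:n_samples]
```

      With daily data and trends of a month or more, block_length would be several months' worth of observations, following the 3x rule of thumb above.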

      It depends on what you are trying to do. The artificial data is good for getting a feel for what effect your indicator picks up on. You can use it to calibrate: to make sure that if you want to pick up 1-month trends, it is doing that. But it won't tell you if that effect exists in real life.

      Using a block bootstrap for fitting is better than many other fitting methods (as long as you do it with an expanding window, of course) [after all, this is how I allocate portfolio weights]. So you'd fit the parameter you wanted in many different random draws, and take an average of the parameter values.

      But again I'm not a huge fan of fitting to real data.

    4. Thanks again for the answer, Rob. I am working out how to generate the data using the sawtooth method. I am using a modulo function (%) to generate the sawtooth wave, but the return series that comes out of it is very spiky, and then flat with plateaus. I am not sure if this is the type of series I would use to then fit the data. I was thinking of using a triangular wave that had up and down movements, as opposed to just the single up move of a sawtooth wave. Sorry for all the pestering, but I am very curious about the details of your methodology.

      Is it possible for you to give an example or blog post showing how to generate the data and then fitting a sample rule to it? I appreciate your responses. Thank you.

    5. Sorry, I wrote sawtooth, but what I actually meant was triangular.

      Here's a quick example with the noise added; trend length 10 units

      By the way you'd expect to see a return series (for the trend following rule, once passed over the randomly generated price series) which is flat with spikes; that's kind of what positive skew looks like.

    6. I will be toying with this Excel sheet; thank you for the clear example. Would you use this data to fit any rule (carry & trend following)? You mention 5 styles of strategies. Do you change the random data to fit them on? How do the 5 different styles affect the result? I am not clear what the 5 styles are.

      On that same note, would this data be viable for mean reversion strategies?

      I imagine if you are doing some carry or any type of multi-legged trade, this type of data may not be viable? Although you could simulate 2 price series, make them highly correlated, and then take the spread of that. Or I suppose just assume the spread is random and use it as is.

      As always, I appreciate your answers. Thank you.

    7. Derek

      I have 8 trading rules which I style bucket into 'breakout', 'momentum', 'carry', 'mean reversion' and 'long only' (the 'no rule' rule).

      To be clear we're not really 'fitting' when we use this kind of fake data. We're seeing what type of rule variation will do best, or worst, given a particular stylised trend length. We can also find out what correlations are likely to be, and have a good idea of trading costs.

      We don't make a judgement about what length trends will appear in the future, or whether our stylised trend data is realistic or not (eg is the signal:noise ratio about right). Thus we get no information about pre-cost expected returns, but then it's very difficult to have statistically significant information about these in a real back test.

      For any kind of rule trying to pick up trends this makes sense, so 'momentum' and 'breakout'. For other rules, perhaps less so.

      It wouldn't make any sense to use it for carry because, as you say, you need a multi-legged price series. I am struggling to think of a simple way to use fake data to meaningfully calibrate a carry model. The same applies to 'long only' (not that there is any calibration to do there).

      For mean reversion (in an absolute time series sense, rather than between two instruments) it would make sense - by construction these fake price series show trends at one frequency and mean reversion at another, slower, frequency (which might not be what you want, but you can easily add together triangle waveforms of different frequency and amplitude to get something more realistic, like fast mean reversion and slower trends).

      For mean reversion between two (three, four, ...) instruments you could generate two (three, four, ...) correlated (and/or cointegrated) return streams plus noise.
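
      A sketch of that construction: two return streams built from a shared common component, so their expected correlation is the chosen value. The names are illustrative only.

```python
import numpy as np

def correlated_returns(n_days=10000, correlation=0.9, vol=1.0, seed=None):
    """Two return streams sharing a common component; their expected
    correlation is `correlation`. Cumulate each for prices; the spread
    is the difference of the two cumulated price series."""
    rng = np.random.default_rng(seed)
    common = rng.normal(0.0, vol, n_days)
    w = np.sqrt(correlation)
    r1 = w * common + np.sqrt(1 - correlation) * rng.normal(0.0, vol, n_days)
    r2 = w * common + np.sqrt(1 - correlation) * rng.normal(0.0, vol, n_days)
    return r1, r2

r1, r2 = correlated_returns(seed=7)
spread = np.cumsum(r1) - np.cumsum(r2)
```

      For cointegration rather than mere correlation, you would instead generate one random walk and add stationary noise to it to get the second leg.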

      Hope this makes sense

    8. Hi Rob,

      I have been working with your data generator and have finally gotten one that can generate as much data, of many different types, as I want in R, which is great.

      But I referred to your book and am still confused as to how I am supposed to actually select the parameter sets of a system based on this data.

      Let's say I am using the same EWMA system and I have generated a set of 100 random time series using different trend periods and noise std dev settings.

      I then take my EWMA system and run through all possible parameter sets for the lengths: say, a short EWMA from 1 to 100, and a long EWMA at a fixed ratio of 4 x the short EWMA.

      Then, after running the series of 100 backtests (for the 100 parameter sets of short to long EWMA) on this single time series, I look at which parameters generate the least correlated returns relative to each other. This would mean that they are taking different types of entries/exits with respect to each other, or at least are attempting to capture different trend lengths.

      I then repeat the above steps, say, 100 more times, each time with a different Monte Carlo generated time series, and run the same procedure as above.

      I then average the correlations between each pair of parameter sets over the 100 runs, and pick the 5 parameter sets with the most negative average correlations.
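
      The averaging step in the procedure described above might be sketched like this (one correlation matrix per Monte Carlo run, averaged element by element); my own illustration, not code from the book:

```python
import numpy as np

def average_correlation(return_sets):
    """return_sets: list of (n_days x n_variations) return matrices,
    one per Monte Carlo run. Returns the element-wise average of the
    per-run correlation matrices."""
    corr_matrices = [np.corrcoef(r, rowvar=False) for r in return_sets]
    return np.mean(corr_matrices, axis=0)
```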

      I hope that makes sense. I think I am close to what you describe in your book and what we have discussed earlier. But once I implemented it I was not as confident.

      Once you get to the parameter sweeping and monte carlo of additional time series how are you selecting?

      Are you making one type of series at a time, say a 30-day period and 0.5 noise std dev, and then running the parameter sweep on that? That would show which parameter set is best suited to picking up that specific type of trend/noise ratio.

      At this point I can generate so much data I want to narrow down my process to a more concrete set of steps. Then create some easily understood results or method to determine the optimal 5 parameter sets.

      Thank you for your time.


    9. In Part 4 of your book (the staunch systems trader in practice, Ch 15) and in Ch 8, I am still very confused by the part on selecting forecast weights using the bootstrap method.

      I have fitted the EWMA rules on bootstrapped random data, using the markosolver / optimise_over_periods method.

      When I apply the bootstrap method using optimise_over_periods, am I supplying that function with the actual returns given by those EWMA rules backtested on real data?

      Could you please elaborate on the process of bootstrapping to get the forecast weights? Thank you. It's been stumping me for weeks now.


    10. "When I apply the bootstrap method using optimise_over_periods, am I supplying that function with the actual returns given by those EWMA rules backtested on real data?"

      Yes, exactly. The elements of your portfolio are the different rules; the returns are the returns given by running those rules on different instruments.
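
      The bootstrapped weighting idea can be sketched as follows: resample the joint rule returns many times, optimise on each draw, and average the weights. This is not the actual optimise_over_periods code; I use naive closed-form mean-variance weights, clipped at zero, purely for illustration.

```python
import numpy as np

def draw_weights(returns):
    """Naive mean-variance weights on one sample: w ~ inv(cov) * mean,
    clipped at zero and normalised to sum to one."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    raw = np.clip(np.linalg.solve(sigma, mu), 0.0, None)
    if raw.sum() == 0.0:
        raw = np.ones_like(mu)
    return raw / raw.sum()

def bootstrap_weights(rule_returns, n_draws=100, seed=None):
    """Average the weights fitted on many bootstrap resamples of the
    joint rule returns (rows resampled with replacement)."""
    rng = np.random.default_rng(seed)
    n = len(rule_returns)
    draws = [draw_weights(rule_returns[rng.integers(0, n, size=n)])
             for _ in range(n_draws)]
    return np.mean(draws, axis=0)
```

      Here the 'assets' are the trading rule variations, and the returns are those rules' backtested returns, as described above.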

    11. I see: so for the forecast weights, I backtest and get the return series of each of the parameter sets for the EWMA rules, then apply those to the optimise_over_periods function to get portfolio weights, which are really forecast weights.

      I was actually doing this with the forecast values themselves and coming out with results similar to equal weights, or very near that. I am guessing that's because they are just basically random values from -20 to 20. But the weights were "viable", or at least looked OK, so I was a bit confused.

      Really appreciate the fast answer.

  2. Hi Rob,
    I just finished your book, really enjoyed it, thanks for the effort you've put in!
    Although, one question is really bothering me. The idea of volatility targeting runs all through the book, and it's a sensible idea (if I happened to understand it correctly :)): to balance everything based on its riskiness (instrument weights in a portfolio, in the case of using SRs; position-capital allocation when deciding position sizes). And it plays nicely with trend-following / asset-allocation systems, where you're betting on a continuation of the same-direction price movement, so more volatility in this one-directional movement means worse performance.
    But can this approach also be used with mean-reverting stat arb systems? You mentioned that you have a relative value component in your system, but frankly I do not understand how you would apply some of these principles to a mean-reverting spread. For example, one of the implications of volatility targeting is that when your instrument becomes more volatile you cut down its position's capital, and vice versa. But a more volatile mean-reverting instrument is actually a good thing (more volatility, higher profit), so it does not really make sense to reduce the capital allocation in this case. Or does it?

    Thank you in advance.

    1. This comment has been removed by the author.

    2. Dmitry

      Yes you can use vol targeting with relative value systems, and I have done so.

      The 'instrument' will be something like a portfolio of, say, Apple and Google, e.g. 1*AAPL - B*GOOG.

      The 'price' will be treated the same way, and the 'volatility' is easy to calculate.

      Suppose for simplicity the mean, equilibrium, value of the price is zero.

      Now suppose the price becomes positive. We want to put a position on. What risk does this have? Don't forget volatility is a symmetric measure: it thinks there is an equal chance of the price returning to zero, or the price moving higher. The forecast is asymmetric, and says you have a higher chance of the price going to zero. Together these give you the correctly sized position.

      Imagine a spread with low volatility, but which moved (smoothly) to large deviations from zero. You should have a massive position on since your risk is low, but there is a long way for the price to move back and hence lots of profit to be made. Note that whilst the price has been deviating the system would be making losing trades (repeatedly catching the knife).

      A more 'stat arb' price series with small frequent deviations and high volatility would have smaller positions on. However if the thing is close to perfectly mean reverting it would make profits on almost every trade.

      Note that the former system will have a lower Sharpe ratio than the latter, but the latter will have smaller positions on. That doesn't mean the position scaling is wrong; just that the latter instrument is inherently more predictable and has more 'juice' in it.

      Note this is all about position scaling (chapter 10). Capital allocation is an entirely separate issue (chapter 11 for instruments, chapter 8 for trading rule variations). It's important not to confound these two subjects.

      A portfolio optimisation that used backtest results (like a bootstrap) would probably put less money in the smoother, slower, mean reversion than in the fast system. Just beware that scaling up the positions in the latter will be heavily leveraging a negative skew trade.
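
      The position scaling being discussed (chapter 10) boils down to, roughly: position = (forecast / 10) * (daily cash volatility target / instrument currency volatility). A tiny sketch, treating the spread as the 'instrument' (and ignoring the diversification multipliers and other refinements):

```python
def spread_position(forecast, daily_cash_vol_target, instrument_ccy_vol):
    """Forecasts average 10 in absolute value; the volatility scalar is
    the daily cash vol target divided by the instrument's currency vol."""
    vol_scalar = daily_cash_vol_target / instrument_ccy_vol
    return (forecast / 10.0) * vol_scalar

# the low-volatility spread gets the larger position for the same forecast
smooth = spread_position(forecast=-10, daily_cash_vol_target=1000, instrument_ccy_vol=25)
choppy = spread_position(forecast=-10, daily_cash_vol_target=1000, instrument_ccy_vol=100)
```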


    3. Thanks a lot for the detailed answer!
      So at the portfolio optimisation step (with bootstrap) the slow spread will get a smaller weight (less capital to trade in general) because of its lower SR, and the fast/volatile one will get more capital because its Sharpe is higher. But at the time of an actual position entry (assuming each instrument has only one similar rule, for simplicity), the slow spread will get more capital than the fast one, because it's less volatile... Still, the second part does not sit fully well with me, because we're depriving the "juicier" instrument of capital. For example, assume the first spread deviated up from 0 to +$2 in 2 days, and the second spread also deviated from 0 to +$2 but did it in one day, so the second one will be more volatile and get less capital for entering the short position than the first one. Are you saying that this is actually reasonable because the higher volatility of the second one "predicts" a higher possibility of the price continuing to go against us (further up), because it's a symmetric measure, whereas in the case of the first one (because its current volatility is lower) there's a lower chance that it will continue to go up, so it's less risky? (Sorry if I am not making sense :).)

      Another, not directly related, question: when I was reading the parts about the staunch trader, I could not completely comprehend the following: after all the multipliers, weights and standardisations are applied, how fully/effectively will the system be using its total trading capital? I think we do expect that at different times some instruments (subsystems) will be encroaching on the capital "pre-assigned" to other instruments by the initial portfolio optimisation, correct? If that's true, then will the system ever be "starving" because some instruments hogged all available capital and left nothing for the others, or will the whole system of correlations, weights and checks always balance itself so that "everyone will get something"? Maybe another way to put this question is: what's the normal expected percentage of the capital that's "in" (considering your leverage is quite limited)?

    4. Dmitry,
      "Are you saying that it's actually reasonable because that higher volatility of the second one "predicts" higher possibility of the price continuing to go against us (further up) because it’s a symmetric measure, when in case of the first one (because it's current volatility is lower) there's a lower chance that it will continue to go up, so it's less risky? (sorry if I am not making sense :) )."

      Yes, that is exactly what I am saying. Even if you have an amazing trading rule, on a day-to-day basis you are exposed mostly to symmetric risk. So, for example, suppose you had a trading rule with a Sharpe ratio of 2.0 (which, as you know, I personally wouldn't believe). On a day-to-day basis there is only a 56% chance you will make a positive return, and a 44% chance you will lose money. For a Sharpe ratio of 1.0 it's only a 52% chance of being positive. So it's appropriate to use a symmetric risk measure even if you think your forecast is amazing.
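
      Those daily probabilities follow from treating daily returns as Gaussian: P(positive day) = Phi(SR / sqrt(days per year)). A quick check, assuming 256 trading days a year (the exact figures depend slightly on the day-count convention):

```python
from math import erf, sqrt

def daily_win_probability(annual_sharpe, days_per_year=256):
    """P(daily return > 0) for Gaussian daily returns with the given
    annualised Sharpe ratio."""
    z = annual_sharpe / sqrt(days_per_year)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p2 = daily_win_probability(2.0)   # about 0.55
p1 = daily_win_probability(1.0)   # about 0.52
```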

      "how fully\effectively the system will be using it's total trading capital? I think we do expect that at different times some instruments(subsystems) will be encroaching on the capital "pre-assigned" to other instruments by the initial portfolio optimization, correct? If that's true, then will the system ever be "starving" because some instruments hogged all available capital and left nothing for the others, or the whole system of correlations, weights and checks will always balance itself so that "everyone will get something" ? Maybe another way to put this question is what's the normal expected percentage of the capital that's "IN" (considering your leverage is quite limited)."

      Well, we're using derivatives in that part, so it's more appropriate to think about whether we'll be starved of margin rather than capital. On average in my own futures system (which runs at a 25% annualised volatility target) I use about 20% of my capital as margin. So it isn't a problem even if you use the maximum recommended target and are running at the maximum possible forecasts.

      It's perhaps better to think about this problem in a 'cash' portfolio like that of the asset allocating investor. In that section I show how to calculate the maximum volatility target given the volatility of the underlying instruments, assuming the portfolio is 90% invested (to allow some room to increase positions in instruments if their volatility falls).

      If we ignore the volatility of the instruments, then the key input into this is the instrument diversification multiplier. If that is very high then your realisable volatility will be lower. That is the check-and-balance effect at an instrument level.

      (note another reason not to use low vol instruments - they consume too much capital)

      The asset allocating investor example assumes a fixed forecast of 10. However, if you use dynamic trading rules with a 'cash' system you can't do that. The most conservative thing would be to do the same calculations for the maximum possible volatility using the maximum forecast of 20. That would mean on average you'd be using only 45% of your capital. But there would never be a 'starvation' problem.

      In practice it's unlikely that all your instruments will hit a forecast of 20 at the same time. You could do the same calculation with a forecast of 18, or perhaps check in the backtest to see what the maximum total forecast was.

      If you then subsequently get an exceptionally high average forecast across your portfolio then you would be close to running out of capital. But that should be rare.
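
      The arithmetic behind the 45% figure above: if positions are sized so that the maximum forecast of 20 hits the 90% invested ceiling, then the average forecast of 10 uses half of that.

```python
def average_capital_used(max_invested=0.90, average_forecast=10.0,
                         sizing_forecast=20.0):
    """Average fraction of capital invested when sizing off the maximum
    forecast. With the defaults: 0.90 * 10 / 20 = 0.45."""
    return max_invested * average_forecast / sizing_forecast
```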

    5. Rob, thanks for your answers, (it's so cool to talk to a real book author :) )
      So for example if our situation is somewhat in the middle (between futures and static allocation):
      For a dynamic system, say we have 100,000 of cash capital, and the broker is allowing us to borrow another 100,000. Should we start our calculations of the annualised cash volatility target and other values using 90% of the total leveraged amount (2 x 100k x 0.9 = 180k), or is that not the best way to do it? In general, our goal here is severalfold: we do not want our system to starve, but we also want the capital to work as much as possible, i.e. to have, say, 70% (?) of the leveraged amount (200k x 0.7 = 140k) invested on average, and we also do not want to get margin calls when too many trades go against us at the same time (which we should be guarding against in real time). It's probably a slightly different/bigger problem, but maybe you could just point to a direction to go in.

    6. "For a dynamic system, we have let's say 100 000 of cash capital, and the broker is allowing us to borrow another 100 000. Should we start our calculations of annualized cash volatility target and other values using 90% of the total leveraged amount (2 x 200k x 0.9=180k) or it's not the best way to do it? In general, our goal here is several-fold: we do not want our system to starve, but we also want the capital to work as much as possible, i.e. to have, say, 70% (?) of the leveraged amount(200k*0.7=140k) invested on average, as well as we do not want to get margin-calls when too many trades go against us at the same time (from that we should be guarding in real-time..). It's probably a slightly different\bigger problem, but maybe you could just point to a direction to go.. "

      Yes: if you do the calculations here (Leverage factor calculation), but change the desired leverage to 140% (and obviously update the rest of the sheet with what you are trading), then you'll achieve what you want.

      Then you will have a system with an average leverage as required.

      Is 70% appropriate? Well, as long as you're doing the right rescaling of capital with losses, you could survive a 30% 'gap' (a fall in account value before you got a chance to rebalance), which, unless your annualised vol target is very high, is probably safe. You might want to backtest to get a feel for your margin of safety, if you can.

    7. Thanks Rob, I'll definitely try that. But the link appears to be broken at the moment, I cannot access this:

    8. Ok, now it's working, thanks!

  3. Hi Rob. I've just been working through your volatility calculation spreadsheet from Chapter 10 of your book. All the calculations make sense apart from the 25-day moving average volatility column. Correct me if I'm wrong, but shouldn't the calculations start from cell reference H38 and then reference the previous 25 days rather than 24 (cells D14:D38)?

  4. Hi Robert,

    Just wanted to write a couple of questions/comments on your excellent book, but it says I am not to exceed 4096 letters in the comments section. How can I send them to you by mail / author's page?

    Thanks and please keep up your good work


    1. Best way is to connect with me on LinkedIn, then send an email.

  5. Hi Robert,

    just wanted to post some comments/questions on your excellent book, but it says I need to restrict myself to 4096 letters. How can I send them to you by mail, author's page, etc.?

    Thanks and best regards,

  6. Your link to the Harriman page for your book is broken.

    1. Fixed. Thanks very much for pointing that out.

  7. Hi Robert,
    I currently invest about half of my retirement plan in Gary A's Dual Momentum strategy, and half in Meb Faber's GTAA Aggressive 3 strategy. These strategies have very long backtested performance of 16-20% returns, with about 25% max drawdowns. Plus, I only need to check in about once a month to reallocate. Do you think your staunch system trader strategy would have significantly better performance? If so, I could be willing to devote the time it takes for daily updates and dealing with the increased complexity of your system. However, if I could only expect a small bump in performance from your strategy, I would probably stick with what I have. For what it's worth, I have a very high risk tolerance; in fact, I am trying to figure out an economical way to leverage my current strategies. I would be comfortable with 50% or greater drawdown for a boost of 2 or 3 percent in returns. Any advice is appreciated, and congrats on an exceptional book.

    1. Hi Peter. I don't know either of those strategies very well. There are a number of reasons why it's plausible that the staunch systems trader system *could* be better than another strategy:

      - correct position management, e.g. positions adjusted by volatility
      - a more diversified momentum indicator
      - another uncorrelated indicator: carry
      - use of futures (allows leverage, and is cheaper)
      - more diversification over different asset classes

      Any one of these things would probably improve your performance.

      My book, as you've hopefully realised, isn't about selling a specific system, but about teaching you what does or does not make a good system. One of the key determinants of a good system is that it's something you can stick to. It sounds like you've found something you can stick to, so if that's halfway decent...

      I probably shouldn't say this, but if the two systems you describe aren't doing anything stupid they might be worth sticking with. I'm hoping that the book has helped give you the critical thinking techniques to work out if these systems are sensible.

      Also hopefully chapter 9 in particular will help you leverage them safely.

      Alternatively, maybe you could think about adapting elements of these existing strategies.

      Spoiler - I'm writing a book that is much more suitable for long only investments with minimal reallocations...

    2. Can't wait for the next book! Chapter nine is helpful for determining leverage, but my options for employing it are limited in a retirement account. Since I can't use margin, and because I don't feel comfortable with options, I guess the only route would be futures, but a lot of the instruments in my strategies don't have corresponding futures. I use mostly ETFs. I think there are only futures for the standard indices like the S&P 500, but not for the more obscure ETFs, like the emerging markets index.

  8. Hi Rob, you state in your book that one should avoid changing one's volatility target. Suppose I have a very high risk tolerance and a 25-year investing horizon, and I am using your chapter 15 system. Might it make sense to continuously update my annual volatility target to match my latest backtested Sharpe ratio? If I go through a losing streak, my Sharpe will decrease, reducing my volatility target as it does. I would think this is the closest you could get to Kelly optimal. Sure, I would overshoot and undershoot at times, but over such a long time horizon I feel it would even out in the end.

  9. I really would advise against this, in the strongest possible terms. Let's say you're starting with a 30-year backtest. It wouldn't make much difference to the overall SR as you add data through live trading. Also, even with a 30-year backtest your backtested SR is only an estimate, and an estimate with huge uncertainty. That's why I advise using only a small fraction of full Kelly, and limiting your annual risk to the point where your backtested SR probably wouldn't be affecting your risk target. Finally, you're never going to hit your target risk anyway, since volatility isn't perfectly predictable. So this is a lot of effort for little return. If your system returns are mean reverting you'll lose money doing this (see my blog post on trading the account curve). And you're going to incur additional trading costs. There are probably other reasons not to do this, but that should be enough to dissuade you.

    1. Thanks for the response, Rob. What maximum fixed volatility target would be reasonable for the above? Maybe 50%, to match the conservatively projected 0.5 Sharpe? Or should I really stick with a maximum of half Kelly, even though I have such a long horizon and high risk tolerance?

    2. Half Kelly is the absolute maximum you should use.

    3. Thanks, Rob. I ran your chapter 15 Python system on 8 instruments, and my backtest shows a Sharpe of 0.80. In your post on small account sizes and diversification, you state that a Sharpe of 0.61 can be expected for 8 instruments. Given my high risk tolerance, should I set the annual volatility target to 40% (half of 0.80) or 30%?

    4. Reread chapter 9. You also need to apply a correction factor to reflect the fact that the backtested SR will be overstated.

    5. Makes sense: if I apply your recommended adjustment of 75% of the backtested Sharpe, I come to 0.60. So that confirms a maximum reasonable volatility target of 30%.
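
      Spelling out that arithmetic (using the approximation that a full Kelly volatility target equals the Sharpe ratio, so half Kelly is half the adjusted Sharpe):

```python
backtested_sharpe = 0.80
haircut = 0.75                                  # allowance for an overstated backtested SR
adjusted_sharpe = backtested_sharpe * haircut   # 0.60
# full Kelly vol target ~ Sharpe ratio; half Kelly is the maximum advised
max_vol_target = adjusted_sharpe / 2.0          # 0.30, i.e. 30%
```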

  10. Hi Robert and congratulations for an exceptional read!
    Just a quick question, as I am not 100% sure what your assumption is about the Gaussian normal distribution.

    Do you generally assume that daily *price changes*, or daily *percentage changes* follow such a distribution?

    Thank you!

  11. Hi Rob,

    Your book is very interesting. Congrats for a very structured approach!

    I have one quick question for you. I see that you recommend exiting a strategy using a stop loss with a given X. When you are backtesting a strategy, do you use such an exit method? I had a quick look at your code on GitHub but I could not find any reference to X for backtesting purposes.

    Thank you!

    1. That's a specific method I recommend for traders who want to have separate entry and exit rules (especially when the entry is discretionary) with discrete trades. But I don't use such a system myself, instead I use a continuously changing forecast (described in chapter 15 of the book).

  12. Hi Rob,

    I really liked your book and now would like to apply your framework to my strategy (and hopefully, along the way, automate it).
    I would like your input on the instrument block. As I trade exchange futures spreads, 1% of one instrument is not really meaningful, as spreads can have huge variation in percentage terms...

    Would you have any suggestions on how to define the instrument block by any chance? I was thinking of possibly using the roll yield.

    Thank you!

    1. I think you're not talking about defining a block (which will probably be a single futures contract long+short) but about the price of a block.

      "As I trade exchange futures spreads, 1% of one instrument is not really meaningful as spreads can have huge variation in percentage terms..."

      And of course spreads can go negative.
      Anyway you can just add a constant to the price when working out the volatility; it will still work.

      Consider the worked example at the end of chapter ten. Now let's do it for eurodollar calendar spreads.

      Price: Dec 17 / Dec 18 is currently at about $0.30

      Price volatility: the volatility of the spread. Let's say it's $0.02 a day. Now we add $100 to the price ($0.30), giving us $100.30. The % volatility is around 0.02 / 100.3 = 0.01994% per day

      Instrument block: long 1 contract / short the next contract

      Block value: how much do we lose or gain from a 1% move in the price? The price is $100 + spread. A 1% move would be a move in the spread of 100.3 × 1% = 1.003 points. This would cost us 1.003 × $2500 = $2507.50

      Instrument currency volatility = block value × price volatility = $2507.50 × 0.01994 = $50 per day

      Then the rest of the formula works as normal.

      In your case it probably makes sense to find this value by using the simplified formula that doesn't use % volatility at all. Instead start with the spread volatility (0.02 points per day), and multiply by the value of a 1 unit spread move ($2500). You get the same answer: $50.
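
The worked example above can be sketched in Python (the $2500-per-point contract value and all prices come from the reply; the variable names are mine):

```python
# Sketch of the spread worked example above. Assumes a eurodollar calendar
# spread where a 1.00 point move is worth $2500 per contract.

POINT_VALUE = 2500.0   # dollars per 1.00 point move in the spread
CONSTANT = 100.0       # arbitrary constant added so the price is well away from zero

spread_price = 0.30    # Dec 17 / Dec 18 spread
spread_vol = 0.02      # daily volatility of the spread, in points

# Method 1: add a constant, then use the percentage-based formula
shifted_price = spread_price + CONSTANT            # 100.30
pct_vol = 100 * spread_vol / shifted_price         # ~0.01994 (% per day)
block_value = shifted_price * 0.01 * POINT_VALUE   # value of a 1% move: $2507.50
icv_pct = block_value * pct_vol                    # ~$50 per day

# Method 2: skip percentages entirely
icv_points = spread_vol * POINT_VALUE              # 0.02 * 2500 = ~$50 per day

print(icv_pct, icv_points)  # both about $50
```

Note that the constant cancels out of method 1, which is why the two methods agree.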

    2. Thank you for coming back to me.
      I have done something similar to your last example (using the points per day) and it seems to work.
      Your first example would give different values depending on the constant that you choose. If you choose 100, it seems to always give a similar result to the second example.
      Why did you choose 1% of the price for the instrument block? Would it work if you defined the instrument block as 1% of the trading capital (and size accordingly)?

    3. The reason is that most people are used to measuring prices in percentage terms. In fact, if you look at the calculation, the price itself cancels out. So you can use the volatility calculated directly in price differences rather than as a %.

      "Would it work if you defined the instrument block as 1% of the trading capital (and size accordingly)?"


  13. May I ask how the process of volatility standardisation is done exactly? I can't work it out from the book.

    My guess is: you take a list of trades from a backtest, normalise the trade results to percentages 0-100 (how?), treat these as probabilities, calculate the x-values of the normal distribution, and sum these x-values up for every year.

    Then the yearly sums would be the volatility-standardised returns used to compare strategies?

    Am I right, or how else would it be done?

    1. Volatility standardisation occurs at different points. Could you be more explicit about what the standardisation is for?

  14. Adjusting returns to make strategies comparable and usable on different instruments.
    (Aside from costs, at what other points would it be used?)

    1. Properly designed, all strategies will have the same expected standard deviation of returns, regardless of the instrument. However, if that still isn't sufficient, then just measure the annualised standard deviation of each instrument *si*, calculate the average *s-*, then multiply all of that instrument's returns by the ratio *s-*/*si*

    2. So it would be:
      returns * (average annualised standard deviation)/(instrument's annualised standard deviation)

      You say it can be standardised in the design process already. How would that be done?

    3. The design process in part three of the book produces subsystem returns for each strategy with the same expected risk.

  15. Dear Rob,

    I have bought your book - must say that it’s by far the best I’ve read on systematic trading.

    On this topic, am I right to say that you go for a risk-parity kind of allocation where each asset contributes equally to portfolio vol? Is this how you came up with the handcrafted weights, which are the weights of a portfolio consisting of assets with those correlations?

    And on the point of vol standardisation, you mention that it is used to make sure that the assets we are optimising weights for have the same expected risk, but I don't quite understand this. Expected risk for bonds is maybe 8% and for equities 20%; what do you mean by vol standardising such that they are equal?

    Appreciate the kind explanation pls Rob!

    1. Dear Henry,
      Thanks for the kind words. Remember the best way to show your appreciation to any author is a nice Amazon review :-)

      On your specific questions, firstly a portfolio of trading systems with position scaling as I describe will all by construction have the same expected risk, regardless of what the underlying asset is.

      If I was to give them equal weights they would all still contribute to portfolio vol differently because of correlations. Handcrafting gives you a portfolio which roughly has maximum diversification / minimum risk.

      This will also be roughly equal risk contribution, although not necessarily exactly so.

  16. Dear Rob,

    Thanks for getting back on this! Yes, a nice Amazon review will definitely be given - I'll get to it soon :-)

    In your book (the portfolio allocation chapter) an example you used was S&P500, NASDAQ and US20Y, and you mentioned that all returns have been volatility standardised to have the same expected standard deviation. I’ve been trying to understand that but still can’t wrap my head around it - if US20Y has annual vol of 8% and returns of 2%, how will changing returns change the expected risk for this asset? Would you mind giving an example with some calculations as to how returns can be volatility standardised to have the same expected standard deviation?

    Thanks Rob!

    1. Just divide all the returns by the relative ratio of the standard deviations. Then all assets will have the same standard deviation, but the Sharpe ratio will be preserved.
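
The standardisation Rob describes can be sketched with synthetic data (a minimal illustration; the return series, random seed and function name are made up for the example, roughly matching the 8%/20% bond/equity vols mentioned above):

```python
# Sketch of volatility standardisation: scale each asset's returns so all
# series share the average annualised volatility. Scaling by a constant
# changes the vol but leaves each asset's Sharpe ratio unchanged.
import numpy as np

rng = np.random.default_rng(0)
# 16 ~ sqrt(256 business days); synthetic ~8% and ~20% annual vol series
bond_returns = rng.normal(0.02 / 252, 0.08 / 16, 2520)
equity_returns = rng.normal(0.05 / 252, 0.20 / 16, 2520)

def vol_standardise(returns_by_asset):
    """Multiply each return series by (average SD / its own SD)."""
    stdevs = {k: np.std(v) for k, v in returns_by_asset.items()}
    avg_sd = np.mean(list(stdevs.values()))
    return {k: v * (avg_sd / stdevs[k]) for k, v in returns_by_asset.items()}

standardised = vol_standardise({"bonds": bond_returns, "equities": equity_returns})
# After standardisation both series have the same standard deviation
```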

  17. Hi Rob,
    I am wading into the field of systems trading and am in the process of digesting your book.
    There is one concept I am having trouble with: forecasts for staunch system traders. I can understand how a semi-automatic trader should make discretionary forecasts. But how does the system trader forecast in a presumably non-discretionary way? If he uses his trading rule applied to a particular instrument how is this done? To the best of my knowledge trading rules can turn bullish or bearish (i.e. produce buy or sell signals) but they don’t tell you anything about the expected move (or maybe I should use something like the average expected profit generated by the rule?).

    I must be missing something obvious here.

    Thanks for any help and congrats on your book.


    1. No, you should construct your trading rules so they also tell you what the expected move is. There is more on this in chapter 7, which I assume you are not up to yet, but if we take the example of a moving average crossover then we can formulate it in a binary way as:

      if moving average fast > moving average slow: buy else sell

      or in a continuous way as:

      position is proportional to (moving average fast - moving average slow)

    2. Thanks Rob. My thinking was pre-conditioned by the assumption that you establish or reverse 100% of your position when the two moving averages cross. I now understand that your EWMAC rule has a forecast of zero (i.e. no position) when the moving averages cross and that the forecast (and hence position with volatility unchanged) is proportional to their gap. Correct?
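
The continuous formulation discussed above can be sketched as follows (a simplified illustration only: the actual EWMAC rule in the book also divides by volatility and scales and caps the forecast):

```python
# Minimal sketch of a continuous moving average crossover forecast, in the
# spirit of the EWMAC rule in chapter 7 (deliberately simplified).
import pandas as pd

def ewmac_raw(price: pd.Series, fast: int = 8, slow: int = 32) -> pd.Series:
    """Raw crossover: position is proportional to (fast EWMA - slow EWMA)."""
    fast_ewma = price.ewm(span=fast).mean()
    slow_ewma = price.ewm(span=slow).mean()
    return fast_ewma - slow_ewma

# A rising price gives a positive (long) forecast, not just a binary "buy";
# the forecast is zero, and hence the position flat, where the averages cross.
prices = pd.Series([100, 101, 103, 102, 105, 107, 110, 112], dtype=float)
print(ewmac_raw(prices).iloc[-1] > 0)  # True for this uptrend
```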

  18. This comment has been removed by a blog administrator.

  19. Hello Rob,
    I have recreated your EWMAC trading rule with forecasts from your book. If I have done it correctly, then the highest reading of the forecast for a strong buy (+20) is at the top of the peak most of the time (unless there is a huge, straight trend move). The largest position would therefore coincide with a large loss, maybe larger than the wins before. How would you handle this?

    I also found it very interesting which trading rules are used by professionals in the industry, like the examples in your book. Can you point me to resources with more information and examples? I would highly appreciate your answer and any sources to learn more.

    1. There is some (very weak) statistical evidence that the effect you describe exists (naturally the human brain is drawn to anecdotal evidence like you have seen, but it's important to do things properly). The correct way to handle it is to reduce your forecast when it is very strong. However it is a very weak effect and if you try and properly fit this out of sample you find it has almost no effect on performance (at best), and at worst destroys performance (if you overfit).

      I'd recommend reading:

    2. Thank you very much for your recommendations.

      I have uploaded a picture with an example of EWMAC 8,32 as an indicator on Gold:
      That's the effect I mean. The strongest buy signal is at the top of the peak, and the same for sells. Maybe it's a natural effect of trend following.

    3. Would you say this example looks correct?

    4. To be honest, I have no way of checking if it is or not.

  20. Apologies if this is the wrong place, but I couldn't find a reference to "speed limit" anywhere else on the blog. In Chapter 12 of Systematic Trading you introduce the concept of a speed limit as a means of controlling costs (lovely idea). My question is which value of speed limit do you actually use when automating your trade placement? The most recent value?

    The reason I ask is that when I coded the speed limit up (using a 36 period EMA for the SD of returns), I noticed a wide spread of values over time, dependent on volatility. E.g. using Oanda daily data for the Dow 30, I get a current maximum number of trades per year of 182 (recent equity market volatility), but with a one year lookback the min is 25, the max is 234, the median is 34 and the SD is nearly 50.

    Given that volatility can significantly change the speed limit value during the life of an open trade, I'm wondering whether either a longer EMA or the median speed limit might be better than just the latest speed limit value? Many thanks. (And apologies if this comment shows up as being from Unknown; I keep logging into my Google account to post, but my name (Andy) never seems to show up.)

    1. The speed limit shouldn't be 'number of trades per time period'; it is expressed in the book as 'maximum expected cost in Sharpe Ratio units'. This should be fairly similar over time.

    2. Duh. Sorry, I used the wrong terminology. I was alluding to this passage:
      "So dividing 0.13 by the standardised cost of an instrument will give you the turnover speed limit for that instrument. For a relatively cheap futures market like the Euro Stoxx this implies a maximum turnover of 0.13 ÷ 0.002 = 65 round trips per year"

      I'm seeing a lot of variation in the maximum number of round trips per year.
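
The speed limit arithmetic quoted above can be sketched directly (the 0.13 figure and the Euro Stoxx cost of 0.002 come from the quoted passage; the function name is mine):

```python
# Sketch of the turnover speed limit: a maximum acceptable annual cost,
# expressed in Sharpe ratio units, divided by the cost per round trip.

SPEED_LIMIT_SR = 0.13  # maximum annual cost, in Sharpe ratio units

def max_round_trips_per_year(cost_per_trade_sr: float) -> float:
    """Turnover speed limit given the standardised cost of one round trip."""
    return SPEED_LIMIT_SR / cost_per_trade_sr

# Euro Stoxx example from the book: 0.002 SR units per round trip
print(max_round_trips_per_year(0.002))  # roughly 65 round trips per year
```

Since the cost per round trip moves with volatility, the round trip limit will vary over time even though the 0.13 SR budget is fixed.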

  21. Not sure if the formatting will work, but below are some further examples of what I'm referring to re max round trips per year. Would be v interested in your view regarding the best number to use. Many thanks.

    Market       Min  Max   Mean      Median  SD       Latest
    AUD/CAD      20   29    24.627    25      2.5863   27
    AUD/JPY      42   52    47.6032   48      2.4432   43
    AUD/USD      35   46    38.9087   39      2.594    43
    BCO/USD      34   68    45.2659   43      9.2923   58
    CHF/JPY      31   42    35.7262   35      2.8815   32
    CORN/USD     4    6     4.8532    5       0.5974   5
    DE10YB/EUR   35   46    39.3135   39      2.6602   36
    DE30/EUR     80   122   101.8135  103     10.1852  116
    EU50/EUR     25   33    29.0913   29      2.0733   31
    EUR/AUD      35   49    39.0952   38      3.3977   44
    EUR/CAD      34   44    38.7103   38      2.5294   40
    EUR/CHF      14   30    17.6944   15      4.9866   28
    EUR/GBP      41   50    44.6468   45      1.7533   45
    EUR/JPY      49   66    56.2857   57      4.0136   60
    EUR/NZD      22   37    26.119    25      4.1238   34
    EUR/USD      46   57    50.25     50      2.3458   53

    1. I'd use an average of the 'mean' across markets weighted by how much data you have for each market (so BCO would get less of a weight than AUD/USD).

      (Interesting selection of markets there...)
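
The weighting Rob suggests can be sketched as follows (the data lengths below are hypothetical; only the mean turnovers are taken from the table above):

```python
# Sketch of a data-length-weighted average of per-market mean turnovers,
# so markets with shorter histories (like BCO here) get less weight.

def weighted_mean_turnover(mean_turnover: dict, n_days: dict) -> float:
    total = sum(n_days.values())
    return sum(mean_turnover[m] * n_days[m] / total for m in mean_turnover)

# Mean turnovers from the table; data lengths are made-up illustrations
means = {"AUD/USD": 38.9, "BCO/USD": 45.3, "EUR/USD": 50.3}
days = {"AUD/USD": 2500, "BCO/USD": 800, "EUR/USD": 2500}
print(weighted_mean_turnover(means, days))
```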

  22. Great - many thanks.

    :-) that's just the first few from the list for pre-screening

  23. Hello Rob,

    congratulations on your well-written book, which I haven't yet finished reading.

    I'd like to ask you a question about the instrument value volatility, which is relevant for position sizing. You defined it in your book as instrument currency vol times the exchange rate. This measure of risk seems to disregard FX vol. In the volatility scalar calculation, wouldn't it make sense to divide the daily cash volatility target by the block value times the price volatility, after the price has been converted daily into one's trading capital currency?

    Thanks in advance for your comments. Best,


    1. I don't think that is a stupid thing to do, but it will make almost no difference in practice

  24. Hi Rob,

    I have looked though the blog but so far haven't found a section on currency pairs as instruments. I am looking at using your framework on several systematic strategies that would be run on all the G10 currency pairs (45 in all). I have a couple of questions that arise from trying this so far:

    a) With this number of instruments and several strategies, I often get signals close to zero for some of the pairs (instruments) from specific strategies, and when these are combined there are many combined forecasts that are close to zero, so the signals are weak. This may be fine, in so far as it reflects the signals accurately, but I am wondering whether to apply a minimum forecast value threshold (setting the forecast to zero otherwise), and also how to treat a zero signal, as if left included it will have the effect of reducing the combined score.

    b) At the portfolio construction stage, again because I have a large number of instruments, I am left with small weights applied to each instrument to get the total position. With 1/n and n=45, I get 2.2% multipliers. Two ideas I had were i) try to group within developed FX using correlations, similarly to your handcrafting technique, or ii) try to combine all the currency pair trades into a smaller number of currencies vs the USD, and reduce to 10 instruments. i) could work, although I would need to allow negative correlations given pairs can be quoted with a given currency on different sides by convention. ii) seems to risk changing the basis on which the original forecast signals were generated which doesn't sound ideal.

    In some ways this is a problem that would crop up when using a large number of instruments in general, as the diversification multiplier is capped at 2.5 and in any case my calculated multiplier is around 1.5. Have you come across any of this yourself, or am I missing something?

    I haven't seen anything in the first book on this question. I haven't read all of the second one yet so it may be covered there.

    Many thanks,


    1. The problem of a large number of instruments with sparse forecasts isn't easy to solve. Because of the sparse forecasts you end up with relatively low average risk, and therefore high estimated diversification multipliers (IDM).

      If you follow my advice to cap the IDM then your average risk will be lower than the risk target you are aiming for.

      You might want to apply a non linear mapping to forecasts (like this); that will mean you end up with a smaller number of larger forecasts, which solves one problem (having a number of tiny forecasts that don't translate into meaningful positions), but won't change the diversification multiplier issue (in fact, by making the forecasts sparser, it makes it worse). However, it is a nice way to get extra diversification into a portfolio that would otherwise be unable to achieve it.

      You can run the system with a larger diversification multiplier (above the cap) so that your average risk is correct; but that means that on occasions when you do have a higher than average number of large positions you will be running unusually high risk. One option is to cap your risk when this happens, and therefore proportionally reduce your positions OR apply a higher threshold than normal in any non linear mapping. Doing this will weaken the relationship between forecast strength and expected risk when forecasts are unusually high.
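
A non linear forecast mapping of the kind described could be sketched like this (a hypothetical illustration: the threshold, cap and rescaling are my own assumptions, not the exact mapping from the linked post):

```python
# Hypothetical non linear forecast mapping: forecasts below a threshold are
# set to zero, and the surviving range is rescaled back onto the full range,
# producing fewer but larger forecasts. Threshold and cap are illustrative.

def map_forecast(forecast: float, threshold: float = 5.0, cap: float = 20.0) -> float:
    if abs(forecast) < threshold:
        return 0.0
    sign = 1.0 if forecast > 0 else -1.0
    # Rescale the surviving band [threshold, cap] onto [0, cap]
    scaled = (abs(forecast) - threshold) * cap / (cap - threshold)
    return sign * min(scaled, cap)

# Small forecasts vanish, large ones are amplified up to the cap
print([map_forecast(f) for f in (-25, -10, 2, 6, 20)])
```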