Tuesday, December 26, 2023

Download the Lion Gold MT4/MT5 EA for FREE [Updated Version].

The Lion Gold MT4 EA harnesses the inherent volatility of gold, employing a strategy centered on detecting and responding to breakouts at daily support and resistance levels. This straightforward yet highly effective approach is particularly well-suited to the notable volatility of the gold market. By concentrating on these pivotal points, the EA capitalizes on significant market movements, enabling traders to profit from substantial price fluctuations.

Distinguished by its versatility, the Lion EA does not rely on a single strategy; instead, it incorporates seven distinct built-in strategies that operate synergistically. This multifaceted approach ensures adaptability to diverse market conditions, fostering a steady growth trajectory for investments. Losses are inherent in trading, but these varied strategies keep the trading process controlled and transparent, without any manipulation.

A noteworthy aspect of the Gold MT5 EA is its steadfast commitment to risk management. The EA deliberately steers clear of high-risk trading styles such as grids and martingale, known for their potential to amplify losses. Opting for a more conservative approach, it prioritizes sustainable trading tactics to safeguard the long-term financial well-being of the trader.

Every trade executed by the Lion MT5 EA is accompanied by predefined take profit (TP) and stop loss (SL) levels, establishing clear targets and limits for each position. Additionally, the EA employs a trailing stop loss strategy: as a trade moves favorably, the stop loss is adjusted to secure profits while allowing for further growth. This dynamic approach serves to minimize risks while maximizing potential gains.
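By way of illustration only (the EA's own code is not public, so this is a hedged sketch of the general technique rather than the EA's actual logic), the ratchet behaviour of a trailing stop on a long position can be expressed in a few lines of Octave:

% illustrative trailing stop for a long position: the stop only ever
% rises, locking in profit as price advances (hypothetical numbers)
trail_dist = 5.0 ;                          % trailing distance in dollars
price_path = [ 2000 2004 2010 2008 2015 ] ; % successive gold prices
stop_level = price_path( 1 ) - trail_dist ;
for ii = 2 : numel( price_path )
  stop_level = max( stop_level , price_path( ii ) - trail_dist ) ; % ratchet up only
endfor
disp( stop_level ) % 2010, up from the initial 1995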


Before deploying the Gold MT4 Robot in a live trading account, it is advisable to conduct a week-long testing period in a demo account. Familiarize yourself with the robot's operations and functionalities to ensure a thorough understanding before transitioning to real trading.


Gold MT4 EA Recommendations:

  • Maintain a minimum account balance of $200.
  • For accounts with balances between $200 and $400, consider employing only strategies 2, 4, and 7, as they exhibit the lowest drawdowns.
  • While specifically designed for XAUUSD, the EA is adaptable to any currency pair.
  • While it performs best on the Daily TimeFrame, it remains effective across various TimeFrames.
  • Optimize stability by running the EA continuously on a dependable VPS; we recommend using a reliable FOREX VPS service such as FXVM.
  • Although the EA is not affected by spread and slippage, it is advisable to choose a reputable ECN broker. (Explore suitable brokers for your needs here.)

Thursday, December 21, 2023

Judging the Quality of Indicators.

In my previous post I said I was trying to develop new indicators from the results of my new PositionBook optimisation routine. In doing so, I need a methodology for judging the quality of the indicator(s). In the past I created my Data-Snooping-Tests GitHub repository, which contains some routines for statistical significance testing and which, of course, can be used on these new indicators. Additionally, for many years I have had a link to TSSB on this blog, from where a free testing program, VarScreen, and its associated manual are available. Timothy Masters also has a book, Testing and Tuning Market Trading Systems, wherein there is C++ code for an entropy test, an Octave compiled .oct version of which is shown in the following code box.

#include "octave oct.h"
#include "octave dcolvector.h"
#include "cmath"
#include "algorithm"

DEFUN_DLD ( entropy, args, nargout,
"-*- texinfo -*-\n\
@deftypefn {Function File} {entropy_value =} entropy (@var{input_vector,nbins})\n\
This function takes an input vector and nbins and calculates\n\
the entropy of the input_vector. This input_vector will usually\n\
be an indicator for which we want the entropy value. This value ranges\n\
from 0 to 1 and a minimum value of 0.5 is recommended. Less than 0.1 is\n\
serious and should be addressed. If nbins is not supplied, a default value\n\
of 20 is used. If the input_vector length is < 50, an error will be thrown.\n\
@end deftypefn" )

{
octave_value_list retval_list ;
int nargin = args.length () ;
int nbins , k ;
double entropy , factor , p , sum ;

// check the input arguments
if ( args(0).length () < 50 )
{
error ("Invalid 1st argument length. Input is a vector of length >= 50.") ;
return retval_list ;
}

if ( nargin == 1 )
{
nbins = 20 ;
}

if ( nargin == 2 )
{
nbins = args(1).int_value() ;
}
// end of input checking

ColumnVector input = args(0).column_vector_value () ;
ColumnVector count( nbins , 0.0 ) ; // bin counts, explicitly zero-initialised
double max_val = *std::max_element( &input(0) , &input(0) + input.numel () ) ; // end iterator one past last element
double min_val = *std::min_element( &input(0) , &input(0) + input.numel () ) ;
factor = ( nbins - 1.e-10 ) / ( max_val - min_val + 1.e-60 ) ;

for ( octave_idx_type ii ( 0 ) ; ii < args(0).length () ; ii++ ) {
k = ( int ) ( factor * ( input( ii ) - min_val ) ) ;
++count( k ) ; }

sum = 0.0 ;
for ( octave_idx_type ii ( 0 ) ; ii < nbins ; ii++ ) {
if ( count( ii ) ) {
p = ( double ) count( ii ) / args(0).length () ;
sum += p * log ( p ) ; }
}

entropy = -sum / log ( (double) nbins ) ;

retval_list( 0 ) = entropy ;

return retval_list ;

} // end of function

This calculates the information content, or entropy (in the information theory sense), of any indicator; the value ranges from 0 to 1, with a value of 1 being ideal. Masters suggests that a minimum value of 0.5 is acceptable for indicators and also suggests ways in which the calculation of any indicator can be adjusted to improve its entropy value. By way of example, the first plot below shows an "ideal" indicator (blue), which has values uniformly spread across its range and an entropy value of 0.9998. The second plot shows a "good" indicator (blue), with an entropy value of 0.7781, which is in fact just random, normally distributed values with a mean of 0 and variance of 1. In both plots, the red indicators fail to meet the recommended minimum value, both having entropy values of 0.2314.

It is visually intuitive that in both plots the blue indicators convey more information than the red ones. In creating my new PositionBook indicators I intend to construct them in such a way as to maximise their entropy before I progress to some of the above-mentioned tests.
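As a quick check of the compiled function (compiled with, e.g., mkoctfile entropy.cc), the following Octave snippet reproduces the flavour of the above plots; exact values will vary from run to run:

uniform_ind = rand( 1000 , 1 ) ;  % uniformly distributed, near "ideal" indicator
normal_ind = randn( 1000 , 1 ) ;  % normally distributed, "good" indicator
entropy( uniform_ind )            % close to 1
entropy( normal_ind , 20 )        % typically around 0.75 to 0.85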

Wednesday, November 22, 2023

Update to PositionBook Chart - Revised Optimisation Method

Just over a year ago I previewed a new chart type which I called a "PositionBook Chart" and gave examples in this post and this one. Those first examples were based on an optimisation over 6 variables using Octave's fminunc function, an unconstrained minimisation routine. However, I was not 100% convinced that the model I was using for the loss/cost function was realistic, so since the above posts I have been testing different models to see if I could come up with a more satisfactory model and optimisation routine. The comparison between the original model and the better, newer model I have selected is shown in the following animated GIF, which covers the last few days' action in the GBPUSD forex pair.

The old model is figure(200), with the darker blue "blob" of positions accumulated in the lower, beginning portion of the chart, while the newer model, figure(900), shows accumulation throughout the uptrend. The reasons I prefer this newer model are:

  • 4 of the 6 variables mentioned above (longs above and below the price bar range, and shorts above and below the price bar range) are theoretically linked to each other to preserve their mutual relationships and are jointly minimised over a single input to the loss/cost function, an input which has bounded upper and lower limits. This means I can use Octave's fminbnd function instead of fminunc. The minimisation objective is the minimum absolute change in positions outside the price bar range, which has a real-world relevance that the mean squared error of the fminunc cost function lacks.
  • because fminunc is "unconstrained," it would occasionally converge to unrealistic solutions with respect to position changes outside the price bar range; this does not happen with the new routine.
  • once the results of fminbnd are obtained, it is possible to calculate the position changes within the price bar range exactly, without resorting to any optimisation routine at all. This gives zero error for the change that is arguably the most important.
  • the results from the new routine seem to be more stable in that indicators I am trying to create from them are noticeably less erratic and confusing than those created from fminunc results.
  • finally, fminbnd over 1 variable is much quicker to converge than fminunc over 6 variables.
The second-to-last point above, derived indicators, will be the subject of my next post.
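By way of illustration, the fminbnd pattern is just a bounded, single-variable minimisation; the toy sketch below uses a stand-in cost function, not my actual PositionBook loss function:

% stand-in for the real objective, the absolute change in positions
% outside the price bar range, which is bounded above and below
toy_cost = @( x ) abs( x .^ 2 - 0.25 ) ;
[ x_opt , f_opt ] = fminbnd( toy_cost , 0 , 1 ) ; % search only within [ 0 , 1 ]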

Sunday, August 20, 2023

Currency Strength Revisited

Recently I responded to a Quantitative Finance forum question here, where I invited the questioner to peruse certain posts on this blog. Apparently the posts do not provide enough information to fully answer the question (my bad), and therefore this post provides what I think will suffice as a full and complete reply, although perhaps not a scientifically rigorous one.

The original question asked was "Is it possible to separate or decouple the two currencies in a trading pair?" and I believe what I have previously described as a "currency strength indicator" does precisely this (blog search term ---> https://dekalogblog.blogspot.com/search?q=currency+strength+indicator). This post outlines the rationale behind my approach.

Take, for example, the GBPUSD forex pair, and give it a current (imaginary) value of 1.2500. What does this mean? Of course, it means 1 GBP will currently buy you 1.25 USD or, alternatively, 1 USD will buy you 1/1.25 = 0.8 GBP. Now, rather than write GBPUSD, let's express the pair as a ratio, GBP/USD, which expresses the idea of "how many USD are there in a GBP?" in the same way that 9/3 shows how many 3s there are in 9. Now let's imagine that at some time period later there is a new pair value, a lower case "gbp/usd," for which we can write the relationship

                    (1)     ( GBP / USD ) * ( G / U ) = gbp / usd

to show the change over the time period in question. The ( G / U ) term is a multiplicative term showing the change in value from the old GBP/USD value of 1.2500 to, say, a new gbp/usd value of 1.2600,

e.g.                ( G / U ) == ( gbp / usd ) / ( GBP / USD ) == 1.26 / 1.25 == 1.008

from which it is clear that the forex pair has increased in value by 0.8% over this time period. Now, if we imagine that over this time period the underlying, real value of USD has remained unchanged, this is equivalent to setting the value U in ( G / U ) to exactly 1, thereby implying that the 0.8% increase in the forex pair's value is entirely attributable to a 0.8% increase in the underlying, real value of GBP, i.e. G == 1.008. Alternatively, we can assume that the value of GBP remains unchanged,

 e.g.                G == 1, which means that U == 1 / 1.008 == 0.9921

which implies that a ( 1 - 0.9921 ) == 0.79% decrease in USD value is responsible for the 0.8% increase in the pair quote.

Of course, given only equation (1) it is impossible to solve for G and U as either can be arbitrarily set to any number greater than zero and then be compensated for by setting the other number such that the constant ( G / U ) will match the required constant to account for the change in the pair value.

However, now let's introduce two other forex pairs (2) and (3) and thus we have:-

                    (1)     ( GBP / USD ) * ( G / U ) = gbp / usd

                    (2)     ( EUR / USD ) * ( E / U ) = eur / usd

                    (3)     ( EUR / GBP ) * ( E / G ) = eur / gbp

We now have three equations and three unknowns, namely G, E and U, and so this system of equations could be laboriously solved by mathematical substitution.

However, in my currency strength indicator I have taken a different approach. Instead of solving mathematically, I have written an error function which takes as arguments the G, E, U, ... multipliers for all currencies relevant to the forex quotes I have access to (approximately 47 various crosses, which are themselves inputs to the error function), and this function is supplied to Octave's fminunc function to simultaneously solve for all of G, E, U, ... given all the forex market quotes. The initial starting values for G, E, U, ... are all 1, implying no change in values across the market. These starting values consistently converge to the same final values for G, E, U, ... for each separate period's optimisation iterations.
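A minimal sketch of this idea, using just the three pairs of equations (1) to (3) and hypothetical pair changes rather than the ~47 crosses of the actual indicator, might look like this:

% observed multiplicative changes in gbp/usd, eur/usd and eur/gbp over
% one period (hypothetical numbers for illustration)
pair_change = [ 1.008 ; 1.002 ; 0.994 ] ;
% x = [ G ; E ; U ] ; the error is the sum of squared mismatches between
% the modelled ratio changes and the observed pair changes
err_fn = @( x ) ( x(1) / x(3) - pair_change(1) ) ^ 2 + ...
                ( x(2) / x(3) - pair_change(2) ) ^ 2 + ...
                ( x(2) / x(1) - pair_change(3) ) ^ 2 ;
x0 = ones( 3 , 1 ) ;              % starting values of 1 imply no change
x_opt = fminunc( err_fn , x0 ) ;
log( x_opt )                      % decoupled log returns of GBP, EUR and USD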

Having got all G, E, U, ... etc. what can be done? Well, taking G for example, we can write

                    (4)     GBP * G = gbp

for the underlying, real change in the value of GBP. Dividing each side of (4) by GBP and taking logs we get

                    (5)     log( G ) = log( gbp / GBP )

i.e. the log of the fminunc-returned value for the multiplicative constant G is the equivalent of the log return of GBP independent of all other currencies or, as the original forum question asked, the (change in) value of GBP separated or decoupled from the pair in which it is quoted.

Of course, having the individual log returns of separated or decoupled currencies, there are many things that can be done with them, such as:-

  • create indices for each currency
  • apply technical analysis to these separate indices
  • intermarket currency analysis
  • input to machine learning (ML) models
  • possibly create new and unique currency indicators

Examples of the creation of "alternative price charts" and indices are shown below, where the black line is the actual 10-minute closing prices of GBPUSD over the last week (13th to 18th August), the corresponding GBP price (blue line) being the "alternative" GBPUSD chart if U is held at 1 in the ( G / U ) term and G allowed to take its derived, optimised value, and the USD price (red line) being the alternative chart if G is held at 1 and U allowed to take its derived, optimised value.

This second chart shows a more "traditional" index-like chart, where the starting values are 1 and both the G and U values take their derived values. As can be seen, over the week there was upwards momentum in both GBP and USD, with the greater momentum being in GBP, resulting in a higher GBPUSD quote at the end of the week. If, in the second chart, the blue GBP line had been flat at a value of 1 all week, the upwards momentum in USD would have resulted in a lower week-ending quoted value of GBPUSD, as seen in the red USD line in the first chart. Having access to these real, decoupled returns allows one to see through the given, quoted forex prices, in the manner of viewing the market as though through X-ray vision.
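Constructing such an index-like series from the per-period optimised multipliers is straightforward; the sketch below uses made-up G values:

% hypothetical per-period optimised G values
G_per_period = [ 1.0008 , 0.9995 , 1.0012 , 1.0003 ] ;
gbp_index = cumprod( [ 1 , G_per_period ] ) ; % index starting at a value of 1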

I hope readers find this post enlightening, and if you find some other uses for this idea, I would be interested in hearing how you use it.
 


Friday, March 3, 2023

Applying Corrective AI to Daily Seasonal Forex Trading

By Sergei Belov, Ernest Chan, Nahid Jetha, and Akshay Nautiyal


ABSTRACT


We applied Corrective AI (Chan, 2022) to a trading model that takes advantage of the intraday seasonality of forex returns. Breedon and Ranaldo (2012) observed that foreign currencies depreciate vs. the US dollar during their local working hours and appreciate during the local working hours of the US dollar. We first backtested the results of Breedon and Ranaldo on recent EURUSD data from September 2021 to January 2023 and then applied Corrective AI to this trading strategy to achieve a significant increase in performance.


Breedon and Ranaldo (2012) described a trading strategy that shorted EURUSD during European working hours (3 AM ET to 9 AM ET, where ET denotes the local time in New York, accounting for daylight saving) and bought EURUSD during US working hours (11 AM ET to 3 PM ET). The rationale is that large-scale institutional buying of the US dollar takes place during European working hours to pay global invoices, and the reverse happens during US working hours. Hence this effect is also called the "invoice effect".
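In code, this rule reduces to a simple position schedule. The sketch below (in Octave, not the authors' own code) assumes each bar is stamped with its New York hour; the exact boundary handling is our assumption:

% -1 = short EURUSD during European working hours, +1 = long during US
% working hours, flat otherwise (hours in ET, daylight saving adjusted)
hour_et = ( 0 : 23 )' ; % hypothetical bar hours
position = -( hour_et >= 3 & hour_et < 9 ) + ( hour_et >= 11 & hour_et < 15 ) ;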

There is some supportive evidence for time-of-day patterns in various measures of the forex market, such as volatility (see Baillie and Bollerslev (1991) or Andersen and Bollerslev (1998)), turnover (see Hartmann (1999) or Ito and Hashimoto (2006)), and return (see Cornett (1995) or Ranaldo (2009)). Essentially, on each of these measures, local currencies depreciate during their local working hours and appreciate during the working hours of the United States.

Figure 1 below shows the average hourly return for each hour of the day over a period from 2019-10-01 17:00 ET to 2021-09-01 16:00 ET, revealing the pattern of returns in EURUSD. The return pattern in the above-described "working hours" broadly reconciles with the hypothesis of a prevalent "invoice effect": returns go down during European working hours and up during US working hours.

Figure 1: Average EURUSD return by time of day (New York time)

As this strategy was published in 2012, it offers ample time for true out-of-sample testing. We collected 1-minute bar data of EURUSD from Electronic Broking Services (EBS) and performed a backtest over the out-of-sample period October 2021 to January 2023. The Sharpe Ratio of the strategy in this period is 0.88, with average annual returns of 3.5% and a maximum drawdown of -3.5%. The alpha of the strategy apparently endured. (For the purpose of this article, no transaction costs are included in the backtest, because our only objective is to compare the performances with and without Corrective AI, not to determine whether this trading strategy is viable in live production.)

Figure 2 below shows the equity curve (“growth of $1”) of the strategy during the aforementioned out-of-sample period. The cumulative returns during this period are just below 8%. We call this the “Primary” trading strategy, for reasons that will become clear below.

Figure 2: Equity curve of Primary trading strategy in out-of-sample period


What is Corrective AI?

Suppose we have a trading model (like the Primary trading strategy described above) for setting the side of the bet (long or short). We just need to learn the size of that bet, which includes the possibility of no bet at all (zero size). This is a situation that practitioners face regularly. A machine learning (ML) algorithm can be trained to determine that size. To emphasize: we do not want the ML algorithm to learn or predict the side, just to tell us the appropriate size.

We call this problem meta-labeling (Lopez de Prado, 2018) or Corrective AI (Chan, 2022) because we want to build a secondary ML model that learns how to use a primary trading model.

We train an ML algorithm to compute the "Probability of Profit" (PoP) for the next minute-bar. If the PoP is greater than 0.5, we set the bet size to 1; otherwise, we set it to 0. In other words, we adopt the step function as the bet sizing function, taking PoP as input and giving the bet size as output, with the threshold set at 0.5. This bet sizing function decides whether to take the bet or pass: a purely binary prediction.
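As a concrete, if trivial, sketch (ours, not the authors' production code), the step bet-sizing function described above is just:

% step function bet sizing: bet full size when the predicted probability
% of profit exceeds 0.5, otherwise stand aside
bet_size = @( pop ) double( pop > 0.5 ) ;
bet_size( [ 0.45 , 0.55 , 0.70 ] ) % ans = 0 1 1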

The training period was from 2019-01-01 to 2021-09-30, while the out-of-sample test period was from 2021-10-01 to 2023-01-15, consistent with the out-of-sample period we reported for the Primary trading strategy. The ML model was trained using the predictnow.ai Corrective AI (CAI) API, with more than a hundred pre-engineered input features (predictors). The underlying learning algorithm is a gradient-boosted decision tree.

After applying Corrective AI, the Sharpe Ratio of the strategy in this period is 1.29 (an increase of 0.41), with average annual returns of 4.1% (an increase of 0.6%) and a maximum drawdown of -1.9% (a decrease of 1.6%). The alpha of the strategy is significantly improved.

The equity curve of the Corrective AI filtered secondary model signal can be seen in the figure below.

Figure 3: Equity curve of Corrective AI model in out-of-sample period


Features used to train the Corrective AI model include technical indicators generated from indices, equities, futures, and options markets. Many of these features were created using Algoseek’s high-frequency futures and equities data. More discussions of these features can be found in (Nautiyal & Chan, 2021).

Conclusion:

By applying Corrective AI to the time-of-day Primary strategy, we were able to improve the Sharpe ratio and reduce the drawdown during the out-of-sample backtest period. This aligns with observations made in the meta-labeling literature for primary strategies such as ours. The Corrective AI model's signal filtering capabilities do enhance performance in specific scenarios.

Acknowledgment

We are grateful to Chris Bartlett of Algoseek, who generously provided much of the high-frequency data for the feature engineering in our Corrective AI system. We also thank Pavan Dutt for his assistance with feature engineering and Jai Sukumar for helping us use the Predictnow.ai CAI API. Finally, we express our appreciation to Erik MacDonald and Jessica Watson for their contributions in explaining this technology to Predictnow.ai's clients.

References

Breedon, F., & Ranaldo, A. (2012, April 3). Intraday Patterns in FX Returns and Order Flow. https://ssrn.com/abstract=2099321

Chan, E. (2022, June 9). What is Corrective AI? PredictNow.ai. Retrieved February 23, 2023, from https://predictnow.ai/what-is-corrective-ai/

Lopez de Prado, M. (2018). Advances in Financial Machine Learning. Wiley.

Nautiyal, A., & Chan, E. (2021). New Additions to the PredictNow.ai Factor Zoo. PredictNow.ai. Retrieved February 28, 2023, from https://predictnow.ai/new-additions-to-the-predictnow-ai-factor-zoo/

Tuesday, February 28, 2023

Kalman Filter and Sensor Fusion.

In the Spring of 2012, and again in the Spring of 2019, I posted a series of posts about the Kalman Filter, which readers can access via the blog archive on the right. In both cases I eventually gave up those particular lines of investigation because of disappointing results. This post is the first in a new series about using the Kalman Filter for sensor fusion, something I had known of before but, due to the paucity of clear information about it online, had never really investigated. However, my recent discovery of this GitHub and its associated online tutorial has inspired me to make a third attempt at using Kalman Filters. What I am going to attempt is to use the idea of sensor fusion to fuse the outputs of several functions I have coded in the past, each of which extracts the dominant cycle from a time series, to hopefully obtain a better representation of the "true underlying cycle."

The first step in this process is to determine the measurement noise covariance or, in Kalman Filter terms, the "R" covariance matrix. To do this, I have used the average of two of the outputs from the above-mentioned functions to create a new cycle, and similarly averaged two extracted trends (price minus these cycles) to get a new trend. The new cycle and new trend are simply added to each other to create a new price series which is almost identical to the original price series. The first screenshot below shows a typical cycle extract, where the red cycle is the average of the other two extracted cycles, and the second screenshot shows the new trend in red plus the new price alongside the old price (blue and black respectively).

Having created a time series with a known trend and cycle in this way, it is a simple matter to run my cycle extractor functions on this new price, compare the outputs with the known cyclic component of price, and calculate the variance of the errors to get the R covariance matrices for 14 different currency crosses.
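A simplified sketch of this R estimation step, with a synthetic known cycle and a noisy stand-in for a cycle extractor's output, is:

% synthetic price series with known trend and cyclic components
t = ( 1 : 500 )' ;
known_cycle = sin( 2 * pi * t ./ 20 ) ;    % known cycle, period of 20 bars
known_trend = 0.01 .* t ;                  % known trend
new_price = known_trend + known_cycle ;    % new price = trend + cycle
% pretend this is the output of a cycle extractor run on new_price
extracted_cycle = known_cycle + 0.1 .* randn( size( t ) ) ;
R = var( extracted_cycle - known_cycle ) ; % measurement noise variance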

More in due course.

 

Friday, November 18, 2022

PositionBook Chart Example Trade

As a quick follow-up to my previous post, I thought I'd show an example of how one could possibly use my new PositionBook chart as a trade set-up. Below is the USD_CHF forex pair for the last two days, showing the nice run-up yesterday and then the narrow range of Friday's Asian session.

The tentative set-up idea is to look for such a narrow range and use the colour of the PositionBook chart in this range (blue for a long) to catch or anticipate a breakout. The take profit target would be the resistance suggested by the horizontal yellow bar in the open orders chart (overhead sell orders), more or less at Thursday's high.

I decided to take a really small punt on this idea but took a small loss of 0.0046 GBP, as indicated in the Oanda trade app above. I entered too soon and perhaps should have waited for confirmation (I can see a doji bar on the 5-minute chart just after my stop-out), or had the conviction to re-enter the trade after this doji bar. The initial trade idea seems to have been sound, as the profit target was eventually hit. This could have been a nice 4/5/6 R-multiple profitable trade. 😞

Friday, November 11, 2022

A New PositionBook Chart Type

It has been almost 6 months since I last posted, due to working on a house renovation. However, I have still been thinking about/working on stuff, particularly on analysis of open position ratios. I had tried using this data as features for machine learning, but my thinking has evolved somewhat and I have reduced my ambition/expectation for this type of data.

Before I get into this I'd like to mention Trader Dale (I have no affiliation with him) as I have recently been following his volume profile set-ups, a screenshot of one being shown below.

This shows the recent Wednesday action in the EUR_GBP pair on a 30-minute chart. The flexible volume profile set-up Trader Dale describes is called a Volume Accumulation Set-up, which occurs immediately prior to a big break (in this case up). The whole premise of this particular set-up is that the volume accumulation area will be future support, off which price will bounce, as shown by the "hand drawn" lines. Below is my version of the above chart, with a bit of extra price action included. The horizontal yellow lines show the support area.

Now here is the same data, but in what I'm calling a PositionBook chart, which uses Oanda's Position Level data downloaded via their API.

The blue (red) horizontal lines show the levels at which traders are net long (short) in terms of positions actually entered and held. The brighter the colours, the greater the difference between the longs and shorts. It is obvious that the volume accumulation set-up area is showing a net accumulation of long positions, and this is an indication of the direction of the anticipated breakout long before it happens. The Trader Dale set-up presumes an accumulation of longs because of the resultant breakout direction and doesn't seem to provide an opportunity to participate in the breakout itself!

The next chart shows the action of the following day and a bit, where price does indeed come back down to the "support" area but doesn't result in an immediate bounce off the support level. The following order level chart perhaps shows why there was no bounce: the relative absence of open orders at that level.

The equivalent PositionBook chart, including a bit more price action, shows that after price fails to bounce off the support level it recovers back into it, and then even more long positions are accumulated (the darker blue shade) at the support level during the London open, again allowing one to position oneself for the ensuing rise during the London morning session. This was followed by another long accumulation during the New York opening session for a further leg up into the London close (the last vertical red line).

The purpose of this post is not to criticise the Trader Dale set-up but rather to highlight the potential value-add of these new PositionBook charts. They seem to hold promise for indicating price direction, and I intend to continue investigating and improving them in the coming weeks.

More in due course.

Friday, October 7, 2022

Conditional Portfolio Optimization: Using machine learning to adapt capital allocations to market regimes

By Ernest Chan, Ph.D., Haoyu Fan, Ph.D., Sudarshan Sawal, and Quentin Viville, Ph.D.


Previously on this blog, we wrote about a machine-learning-based parameter optimization technique we invented, called Conditional Parameter Optimization (CPO). It appeared to work well on optimizing the operating parameters of trading strategies, but increasingly, we found that its greatest power lies in its potential to optimize portfolio allocations. We call this Conditional Portfolio Optimization (which fortuitously shares the same acronym).


Let's recap what Conditional Parameter Optimization is. Traditionally, optimizing the parameters of any business process (such as a trading strategy) is a matter of finding out what parameters gave an optimal outcome over past data. For example, setting a stop loss of 1% gave the best Sharpe ratio for a trading strategy backtested over the last 10 years, or running the conveyor belt at 1 m per minute led to the lowest defect rate in a manufacturing process. The numerical optimization procedure can, of course, become quite complicated: for example, if the number of parameters is large, if the objective function that relates the parameters to the outcome is nonlinear, or if there are numerous constraints on the parameters. There are already standard methods to handle these difficulties.


What concerns us at PredictNow.ai is when the objective function is not only nonlinear but also depends on external time-varying and stochastic conditions. In the case of a trading strategy, the optimal stop loss may depend on the market regime, which may not be clearly defined. In the case of a manufacturing process, the optimal conveyor belt rate may depend on dozens of sensor readings. Such objective functions mean that traditional optimization methods do not usually give the optimal results under a particular set of external conditions. Furthermore, even if you specify that exact set of conditions, the outcome is not deterministic. What better method than machine learning to solve this problem!


By using machine learning, we can approximate this objective function with a neural network, training its many nodes using historical data. (Recall that a neural network can approximate almost any function, but you can use many other machine learning algorithms instead of neural networks for this task.) The inputs to this neural network will include not only the parameters that we originally set out to optimize, but also the vast set of features that measure the external conditions. For example, to represent a "market regime," we may include market volatility, behaviors of different market sectors, macroeconomic conditions, and many other input features. To help our clients efficiently run their models, Predictnow.ai provides hundreds of such market features. The output of this neural network is the outcome you want to optimize; maximizing the future 1-month Sharpe ratio of a trading strategy is a typical example. In this case you would feed the neural network historical training samples that include the trading parameters, the market features, plus the resulting forward 1-month Sharpe ratio of the trading strategy as "labels" (i.e. target variables). Once trained, this neural network can then predict the future 1-month Sharpe ratio for any hypothetical set of trading parameters and the current market features.


With this method, we "only need" to try different sets of hypothetical parameters to see which gives the best Sharpe ratio and adopt that set as the optimum. We put "only need" in quotes because, of course, if the number of parameters is large, it can take a very long time to try out enough sets of parameters to find the optimum. Such is the case when the application is portfolio optimization, where the parameters represent the capital allocations to the different components of a portfolio. These components could be stocks in a mutual fund, or trading strategies in a hedge fund. For a portfolio that holds S&P 500 stocks, for example, there will be up to 500 parameters. In this case, during the training process, we are supposed to feed into the neural network all possible combinations of these 500 parameters, plus the market features, and find out the resulting 5- or 20-day return, Sharpe ratio, or whatever performance metric we want to maximize. All possible combinations? If we represent the capital weight allocated to each stock as w ∈ [0, 1], assuming we are not allowing short positions, the search space is [0, 1]^500; even with discretization, our computer would need to run till the end of the universe to finish. Overcoming this curse of dimensionality is one of the major breakthroughs the Predictnow.ai team has accomplished with Conditional Portfolio Optimization.


To measure the value Conditional Portfolio Optimization adds, we need to compare it with alternative portfolio optimization methods. The default method is Equal Weights: applying equal capital allocations to all portfolio components. Another simple method is the Risk Parity method, where the capital allocation to each component is inversely proportional to the volatility of its returns. It is called Risk Parity because each component is supposed to contribute an equal amount of volatility, or risk, to the overall portfolio's risk. This assumes zero correlations among the components' returns, which is of course unrealistic. Then there is the Markowitz method, also known as Mean-Variance optimization. This well-known method, which earned Harry Markowitz a Nobel prize, maximizes the Sharpe ratio of the portfolio based on the historical means and covariances of the component returns. The optimal portfolio that has the maximum historical Sharpe ratio is also called the tangency portfolio. I wrote about this method in a previous blog post. It certainly doesn't take into account market regimes or any market features. It is also a flagrant violation of the familiar refrain, "Past Performance is Not Indicative of Future Results," and is known to produce all manner of unfortunate instabilities (see here or here). Nevertheless, it is the standard portfolio optimization method that most asset managers use. Finally, there is the Minimum Variance portfolio, which uses Markowitz's method not to maximize the Sharpe ratio but to minimize the variance (and hence volatility) of the portfolio. Even though this does not maximize the past Sharpe ratio, it often results in portfolios that achieve better forward Sharpe ratios than the tangency portfolio! Another case of "Past Performance is Not Indicative of Future Results".
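For concreteness, the Risk Parity allocation described above reduces to a couple of lines (a sketch in Octave, with hypothetical volatilities):

% capital weights inversely proportional to component volatility,
% normalised to sum to 1
vols = [ 0.10 ; 0.20 ; 0.40 ] ;         % hypothetical component volatilities
w = ( 1 ./ vols ) ./ sum( 1 ./ vols ) ; % w = [ 0.5714 ; 0.2857 ; 0.1429 ]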


Let's see how our Conditional Portfolio Optimization method stacks up against these conventional methods. For an unconstrained optimization of the S&P 500 portfolio, allowing for short positions and aiming to maximize its 7-day forward Sharpe ratio:



Method       Sharpe Ratio
Markowitz    0.31
CPO          0.96


(These results are over an out-of-sample period from July 2011 to June 2021, and the universe of stocks for the portfolio is those that have been present in the S&P 500 index for at least 1 trailing month. The Sharpe Ratios we report in this and the following tables are all annualized.) CPO improves the Sharpe ratio over the Markowitz method by a factor of 3.1.


Then we tested how CPO performs for an ETF (TSX: MESH), given the constraints that we cannot short any stock and that the weight w of each stock obeys w ∈ [0.5%, 10%]:



Period               Method            Sharpe Ratio   CAGR
2017-01 to 2021-07   Equal Weights     1.53           43.1%
                     Risk Parity       1.52           39.9%
                     Markowitz         1.64           47.2%
                     Minimum Variance  1.56           38.3%
                     CPO               1.62           43.6%
2021-08 to 2022-07   Equal Weights     -0.76          -30.6%
                     Risk Parity       -0.64          -22.2%
                     Markowitz         -0.94          -30.8%
                     Minimum Variance  -0.47          -14.5%
                     CPO               -0.33          -13.7%


CPO performed similarly to the Markowitz method in the bull market but, remarkably, was able to switch to defensive positions and beat the Markowitz method in the bear market of 2022, improving the Sharpe ratio over the Markowitz portfolio by more than 60% in that bear market. That is the whole rationale of Conditional Portfolio Optimization: it adapts to the expected future external conditions (market regimes), instead of blindly optimizing on what happened in the past.


Next, we tested the CPO methodology on a private investor's tech portfolio, consisting of 7 US and 2 Canadian stocks, mostly in the tech sector. The constraints are that we cannot short any stock and that the weight w of each stock obeys w ∈ [0%, 25%]:



Period               Method            Sharpe Ratio   CAGR
2017-01 to 2021-07   Equal Weights     1.36           31.1%
                     Risk Parity       1.33           24.2%
                     Markowitz         1.06           23.3%
                     Minimum Variance  1.10           19.3%
                     CPO               1.63           27.1%
2021-08 to 2022-07   Equal Weights     0.39           6.36%
                     Risk Parity       0.49           7.51%
                     Markowitz         0.40           6.37%
                     Minimum Variance  0.23           2.38%
                     CPO               0.70           11.0%


CPO performed better than all of the alternative methods under all market conditions. In particular, it improves the Sharpe ratio over the Markowitz portfolio by 75% in the bear market.


We also tested how CPO performs for some unconventional assets: a portfolio of 8 cryptocurrencies, again allowing for short positions and aiming to maximize the 7-day forward Sharpe ratio:



Method       Sharpe Ratio
Markowitz    0.26
CPO          1.00


(These results are over an out-of-sample period from January 2020 to June 2021, and the universe of cryptocurrencies for the portfolio comprises BTCUSDT, ETHUSDT, XRPUSDT, ADAUSDT, EOSUSDT, LTCUSDT, ETCUSDT and XLMUSDT.) CPO improves the Sharpe ratio over the Markowitz method by a factor of 3.8.


Finally, to show that CPO doesn't just work on portfolios of assets, we applied it to a portfolio of FX trading strategies traded live by a proprietary trading firm, WSG. It is a portfolio of 7 trading strategies, and the allocation constraints are w ∈ [0%, 40%]:



Method          Sharpe Ratio
Equal Weights   1.44
Markowitz       2.22
CPO             2.65


(These results are over an out-of-sample period from January 2020 to July 2022.) CPO improves the Sharpe ratio over the Markowitz method by 19%.


In all 5 cases, CPO was able to outperform the naive Equal Weights portfolio and the Markowitz portfolio during a downturn in the market, while generating similar performance during the bull market.


For clients of our CPO technology, we can add specific constraints to the desired optimal portfolio, such as average ESG rating, maximum exposure to various sectors, or maximum turnover during portfolio rebalancing. The only input we require from them is the historical returns of the portfolio components (unless these components are publicly traded assets, in which case clients only need to tell us their tickers). Predictnow.ai will provide pre-engineered market features that capture market regime information. If the client has proprietary market features that may help predict the returns of their portfolio, they can merge those with ours as well. Clients’ features can remain anonymized. We will be providing an API for clients who wish to experiment with various constraints and their effects on the optimal portfolio.


If you’d like to learn more, please join us for our Conditional Portfolio Optimization webinar on Thursday, October 22, 2022, at 12:00 pm New York time. Please register here.


In the meantime, if you have any questions, please email us at info@predictnow.ai.