Journal of Internet Banking and Commerce, April 2018, vol. 23, no. 1
Abstract
The paper studies whether machine learning or technical analysis better predicts the
stock market and, in turn, generates the better return. The research back-tests machine
learning and technical analysis methods on ten years of historical data to predict the
following ten years. After the prediction stage, the research incorporates the main
findings into trading strategies designed to beat the S&P 500 index. To further this
analysis, the paper examines all market periods and then examines the results
specifically in up-market and down-market periods. The sampling period is January 1995
through December 2005, and the trading period is January 2006 through December
2016. The null hypothesis is that machine learning and technical analysis generate
returns with no statistically significant difference. The study uses State Street’s SPDR®
SPY ETF as the benchmark. Data is retrieved from Bloomberg and Yahoo Finance.
Outputs are calculated in R, MATLAB, SPSS, EViews, Python, and SAS.
© Macchiarulo A, 2018
INTRODUCTION
Machine Learning
The inspiration for the machine learning portion of the research stems from the paper
“Stock Price Prediction using Neural Network with Hybridized Market Indicators” by
Ayodele, et al. [1] published in the Journal of Computing. That paper focuses on
predicting the stock market with machine learning techniques such as neural networks,
support vector machines, and various other methods.
Support Vector Machines increase the dimensionality of the samples until the classes
can be linearly separated. Support Vector Machines use a mathematical formula known
as the kernel function. The kernel function transforms the data so that there is a greater
possibility of the classes being separable. When the machine has reached a state where
it can linearly separate the classes, it attempts to find the optimal separation. Once the
machine has built its model, it can predict on new data by performing the same kernel
transformation on the new data and deciding which class each point should belong to.
The support vector machine creates a decision boundary such that the points of each
class fall on the correct side of the boundary. The separating line in the support vector
machine is known as the optimal hyperplane. A line is bad if it passes too close to the
points, because it will be too sensitive to noise and will not generalize correctly.
Thus, the line passing as far as possible from all points is optimal. The standard formula
for a hyperplane is f(x) = β0 + βTx, where β0 is referred to as the bias and β as the
weight vector. The support vector machine uses Lagrange multipliers to obtain the
weight vector and bias of the optimal hyperplane. The Lagrange multiplier strategy finds
the local maxima and minima of a function subject to equality constraints. The most
natural application of a support vector machine here is to predict the direction of the
stock market, that being either positive or negative in different market types such as a
bear or bull market. Figure 1 details linear separation with the kernel function.
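As a minimal sketch of these ideas (written in Python with scikit-learn rather than the packages used in the study, and with synthetic two-class data standing in for bullish and bearish observations), the fitted linear machine exposes the weight vector β and bias β0, and its decision function is exactly f(x) = β0 + βTx:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic bullish (+1) and bearish (-1) observations in two dimensions
X = np.vstack([rng.normal(1.5, 1.0, size=(100, 2)),
               rng.normal(-1.5, 1.0, size=(100, 2))])
y = np.array([1] * 100 + [-1] * 100)

svm = SVC(kernel="linear")
svm.fit(X, y)

beta = svm.coef_[0]          # weight vector (beta)
beta0 = svm.intercept_[0]    # bias (beta_0)
x_new = np.array([0.5, 0.5])

# f(x) = beta_0 + beta^T x; its sign gives the predicted side of the hyperplane
print(beta0 + beta @ x_new)
print(svm.predict(x_new.reshape(1, -1)))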
Neural Network
Neural networks take advantage of the way a biological brain solves problems, with
large clusters of biological neurons connected by axons, in a way that neither a standard
computer program nor a human can process as efficiently. Neural networks use a
process called feed-forward backpropagation. The algorithm takes input variables and
tries to predict the target variable. Neural networks self-adjust input weights by testing
millions of possibilities to optimize the target value toward what the user of the algorithm
wants, whether that is a specified value, a prediction, or a maximization-type
optimization problem. In our research, we will try to predict the stock market with the
input variables. Training data refers to the combination of input and target data. A
neural network can produce an R^2 of 0.99 if the input and target data are consistent.
An example of a neural network is given below, with three inputs, two hidden layers, and
one target value (Figure 2).
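A minimal sketch of such a feed-forward network, using scikit-learn's MLPRegressor with synthetic data (the three inputs and the target below are placeholders, not the study's actual series), shows the connection weights being adjusted by backpropagation to fit the target:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))                                     # three input variables
y = X @ np.array([0.5, -0.2, 0.8]) + 0.1 * rng.normal(size=1000)   # target value

# Two hidden layers, as in Figure 2; weights are learned by backpropagation.
net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.score(X, y))      # in-sample R^2, near 1 when inputs and target are consistent
print(net.predict(X[:3]))   # predictions for the first three observations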
Ensemble Learning
Noise
Noise is created by uncertainty and large-impact events that can skew the machine
learning process. Cross validation is used to eliminate this from the model. Machine
learners attempt to build a model so that, for a set of inputs, it provides the wanted
output. When the model places too much emphasis on having low error, it creates a
decision boundary that is overly complicated and includes the noise. When the model
allows for too great an error, it is not able to properly divide the classes. To avoid the
problems of over- and under-fitting, cross validation is used. Cross validation is a model
evaluation method: it removes some of the data before training begins, and when
training is done, the removed data is used to test the performance of the fitted model on
unseen data.
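A brief sketch of k-fold cross validation in Python (synthetic data; the classifier and the number of folds are illustrative choices only):

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # placeholder up/down labels

# 5 folds: each fold is held out once as unseen test data for the model
# trained on the remaining four folds.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv)

print(scores)          # out-of-sample accuracy per fold
print(scores.mean())   # average generalization estimate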
Technical Analysis
The inspiration for the technical analysis portion of the research stems from the paper
“Forecasting the NYSE composite index with technical analysis, pattern recognizer,
neural network, and genetic algorithm: a case study in romantic decision support” by
Leigh, et al. [2] published in the Journal of Finance. That paper focuses on predicting
the stock market with technical analysis indicators, compared against neural network
techniques.
As described in the paper, using technical analysis accepts the semi-strong form of the
efficient markets hypothesis (“EMH”), which holds that publicly available information
about the stock should already be factored into the stock price, while ignoring the weak
form of the EMH, which states that only past trading history has been built into the price.
The paper examines the validity of the weak form of the EMH. In their comparison, the
authors used a random-selection trading strategy to showcase the optimal weak-EMH
method. In their analysis, they took a series of price and volume patterns across different
methods. They showed that the weak form of the EMH does not hold in the face of
momentum in stock prices. However, their most promising results came from neural
networks, which are incorporated into the machine learning portion of this research [3-6].
Machine Learning
The first step in the machine learning process is to examine the historical data that will
be tested and to define the sample and testing periods. The sampling period is January
1995 through December 2005, and the trading period is January 2006 through
December 2016. The next step in the machine learning process is to collect the data
that will be used to predict the future of the stock market. In a machine learning problem,
there is a set of data that contains both input data and target data; the target data is the
answer the algorithm should produce from the input. These two sets of data combined
are usually referred to as the training data. The training data is given below. By using
the previous data, the machine should be able to predict the next years with precision
(Table 1) [7-12].
The next study that must be performed is the support vector machine. We will be using
the support vector machine to predict the market in both bull and bear trends. Using the
input and target data, we can fit the new model. The support vector machine asks for
the number of data points and the number of dimensions. For the study, we produce a
set of positive and negative examples from two Gaussians. It is important to load
standardized data such as sigma, the mean position for negative or bearish examples,
and the mean position for bullish examples. Next, the data must be trained. For the
study, we split 80% into a training set and 20% into a test set. Using the kernel function,
we predict the data points in the test set.
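The setup just described can be sketched as follows; this is an illustrative Python version rather than the R code actually used in the study, and sigma and the class means are placeholder values:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
sigma = 1.0
n = 200
bull = rng.normal(loc=[2.0, 2.0], scale=sigma, size=(n, 2))    # positive / bullish examples
bear = rng.normal(loc=[-2.0, -2.0], scale=sigma, size=(n, 2))  # negative / bearish examples
X = np.vstack([bull, bear])
y = np.array([1] * n + [-1] * n)

# 80% training set, 20% test set, as in the study
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

svm = SVC(kernel="linear")
svm.fit(X_train, y_train)
print(svm.predict(X_test[:5]))     # predicted class for unseen points
print(svm.score(X_test, y_test))   # test-set accuracy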
The dotted lines are the decision boundaries between positive and negative examples,
and the support vector is the black line. The triangle points above are the bullish
scenario, while the circle points below are the bearish scenario. The next step is to cross
validate the training set to improve the quality of the machine and eliminate any noise.
The k-fold cross validation approach is applied by randomly splitting the samples into
folds. The data is loaded into R; Figure 3 shows the linear support vector machine
output.
The linear support vector machine does not give all the information we need in
predicting stock market direction. Just because we linearly separated positive (bullish)
and negative (bearish) input parameters does not mean they are separable in real life.
For example, if an economic rate falls, that is treated as a negative Gaussian, but the
downward shift may actually be a good sign for the economy. In the example of
unemployment, a decrease in the unemployment rate is good for the economy, and this
is not accurately represented in the linear support vector machine. The nonlinear
support vector machine tackles these problems in a more efficient manner. To transform
the current machine into a nonlinear one, we set the kernel parameter and a constant
variable to one. The data is loaded into R; after running the nonlinear support vector
machine, the results are shown in Figure 4.
The linear and nonlinear support vector machines reach the same conclusion in two
different ways. For the linear support vector machine, there are more triangle (bullish)
points on the spectrum than bearish points. For the nonlinear support vector machine,
the bullish points are dispersed across the red heat map in much greater quantity than
across the blue heat map. The darker the red on the heat map, the more significant each
point is to the machine. In sum, this prediction dictates that there will be more bull trends
than bear trends, which will make the stock market upward sloping and produce a
positive return for the trading period.
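For comparison, a sketch of the nonlinear version of the same machine (again in Python rather than R; the radial-basis-function kernel stands in for the kernel transformation, and C = 1.0 plays the role of the constant set to one):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(2.0, 1.0, size=(200, 2)),    # bullish examples
               rng.normal(-2.0, 1.0, size=(200, 2))])  # bearish examples
y = np.array([1] * 200 + [-1] * 200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Only the kernel changes relative to the linear machine
nonlinear_svm = SVC(kernel="rbf", C=1.0)
nonlinear_svm.fit(X_train, y_train)
print(nonlinear_svm.score(X_test, y_test))   # accuracy on the held-out 20%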
Neural Network
The next step is to fit the inputs and target into the neural network. The network
developed contains nine input variables and ten hidden layers. The target value, or
output, of the neural network is the stock price in one year, or the one-year return
prediction for State Street’s SPDR® SPY ETF (“SPY”). Data is loaded into MATLAB
(Figure 5).
Developing a neural network with external economic factors as inputs and the SPY
stock price as output, through feed-forward backpropagation we assigned optimal
weights to the individual SPY data and the external economic factors, to not only predict
the stock price in one year but also show the allocation of factors that leads to the
prediction.
To remain consistent, the nine input and target values are sampled daily. 70% of the
data is used to train the neural network, 15% to validate it, and 15% to test it. After
training, cross validating, and testing the data, the network runs and produces an R^2
for each piece of the network. The R^2 for training, cross validation, and testing is 0.99.
The R^2 for the model is 0.97. This means that the neural network performed correctly
and its results can be accepted with high confidence. The error histogram shows that
the errors are normally distributed around the mean. Running the same simulation in R
gives the same results; using two independent packages increases the reliability of the
study. The results are below (Figure 6 and Table 2) [13-22].
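A hedged sketch of the 70/15/15 split and the R^2 check, again in Python with placeholder data rather than the nine input series actually used (the single hidden layer of ten units below is a simplification of the network described above):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 9))                                # nine input variables (placeholders)
y = X @ rng.normal(size=9) + 0.05 * rng.normal(size=2000)     # one-year SPY return (placeholder)

# 70% training, 15% validation, 15% testing
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
net.fit(X_train, y_train)

print(r2_score(y_val, net.predict(X_val)))    # validation R^2
print(r2_score(y_test, net.predict(X_test)))  # test R^2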
The neural network predicts the stock market with very high precision. The neural
network in both studies yielded a ten-year return of 117.16% at the close of the trading
period, only 1.04 percentage points below the actual return of 118.2%. That is very high
predictive power. It is notable that the close price and volume of the SPY are the largest
weights used by the network in determining the one-year stock price; the external
economic factors play a much smaller role in the prediction determined by the network.
The next step is to develop an algorithm to trade based on this knowledge. The support
vector machine predicted the stock market to be upward sloping during the trading
period and to have a positive return; it concludes this from the number of bull and bear
trends in the sample. With the support vector results in mind, running the neural network
on the data predicted the stock market within a 1.04% margin of error, which is
extremely high precision. In sum, the machine learning process has predicted that there
will be more bull days than bear days and has almost perfectly predicted the stock
market. This type of knowledge is very powerful and useful for generating profit in
finance.
When predicting, the close price and volume of the SPY are the largest weights used by
the network in determining the one-year stock price, while the external economic factors
play a much smaller role. Due to this discovery, the algorithm trades heavily based on
lagged close prices and trading volume to maximize returns on the stock market. To
incorporate the support vector machine into the trades, the algorithm only rebalances
stocks in the S&P 500 that were “winners” the day before, that is, stocks that ended the
previous day positively. Additionally, the rotation system does not execute rebalancing
trades unless the previous day’s volume is larger than the stock’s average daily trading
volume. The results beat the S&P 500 index, as seen below. Additionally, we run a
neural network in R for every previous period; if a larger weight was given to closing
price over trading volume, we tweak the algorithm to check close prices 60% of the time
as opposed to a 50/50 split. The reverse holds when trading volume carried the higher
weight, in which case we trade on volume 60% of the time over close prices. The
trading results are shown below. The algorithm is shown below before tweaking the
weights according to the neural network parameters [23-30].
def initialize(context):
    # Constant weights applied to the volume and close-price signals
    context.volume = 0.5
    context.close = 0.5

    # ETF traded, with portfolio weight
    context.etfs = {
        symbol('SPY'): 1.0,  # State Street's SPDR(R) SPY ETF
    }

    # Set commission
    set_commission(commission.PerShare(cost=4.95, min_trade_cost=0.0))

    # Rebalance the portfolio 35 minutes after the open every trading day
    schedule_function(rebalance, date_rules.every_day(),
                      time_rules.market_open(minutes=35))


def rebalance(context, data):
    # Last three daily bars of price and volume for SPY (sid 8554)
    closes = data.history(sid(8554), 'price', 3, '1d')
    volumes = data.history(sid(8554), 'volume', 3, '1d')

    # Trade only if yesterday was a "winner" (positive close) on rising volume
    if closes.iloc[-2] > closes.iloc[-3] and volumes.iloc[-2] > volumes.iloc[-3]:
        for stock, weight in context.etfs.items():
            order_target_percent(stock,
                                 weight * context.volume + weight * context.close)
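The 60/40 tilt described above, shifting weight toward close price or toward volume depending on which the previous year's neural network weighted more heavily, could be layered on the same algorithm roughly as follows; adjust_signal_weights is a hypothetical helper for illustration, not the study's actual code:

def adjust_signal_weights(context, close_weight_dominates):
    # If last year's network gave the larger weight to lagged close price,
    # check close prices 60% of the time; otherwise favor trading volume 60/40.
    if close_weight_dominates:
        context.close, context.volume = 0.6, 0.4
    else:
        context.close, context.volume = 0.4, 0.6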
The total return for the period is 204%, as opposed to the S&P 500 return of 118.2%.
The strategy also beats the market over the long term. The machine learning strategy
beats the market on a month-to-month basis in 69 out of 132 months, or 52.27% of the
time. The maximum drawdown of the strategy is 46.9%, reached during the recession. It
is apparent that the strategy does much better in a bullish market than in a bearish
market. Running the strategy over ten years produces a beta of only 0.72, which is less
risky than investing in the market. Additionally, the Sharpe ratio is 0.51, the Sortino ratio
is 0.71 (negatively skewed), and the volatility or standard deviation is 0.28. The month
with the highest beta was April 2007, at 2.598, which is expected given how risky the
market itself was during that time. In sum, the machine learning algorithm, which learns
from the previous year and adjusts the percentages of buys and shorts based on trading
volume and close prices, beats the market by 85.8% over ten years with slightly higher
volatility than the market. The strategy is more volatile in 116 of the 131 months; that is,
88.54% of the time the standard deviation of the strategy is higher than the market's. In
exchange for the higher volatility, the strategy nearly doubles the market's return [31-34].
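For reference, the reported risk statistics (beta, Sharpe ratio, Sortino ratio, volatility, and maximum drawdown) can be computed from a strategy return series and a benchmark return series roughly as sketched below; the two return arrays are placeholders, not the study's data, and the risk-free rate is assumed to be zero:

import numpy as np

def risk_stats(strategy, benchmark, rf=0.0):
    cov = np.cov(strategy, benchmark)
    beta = cov[0, 1] / cov[1, 1]                       # sensitivity to the benchmark
    sharpe = (strategy.mean() - rf) / strategy.std()   # per-period Sharpe ratio
    downside = strategy[strategy < rf]
    sortino = (strategy.mean() - rf) / downside.std()  # penalizes downside moves only
    equity = np.cumprod(1 + strategy)
    max_drawdown = 1 - (equity / np.maximum.accumulate(equity)).min()
    return beta, sharpe, sortino, strategy.std(), max_drawdown

# Placeholder monthly return series for illustration only
rng = np.random.default_rng(6)
benchmark = rng.normal(0.005, 0.04, size=132)
strategy = 0.72 * benchmark + rng.normal(0.004, 0.03, size=132)
print(risk_stats(strategy, benchmark))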
Technical Analysis
For each method, there were 120 total observations over the total sample period from
January 2007 to December 2016. Machine learning had the highest overall average
monthly return at 1.19%. During this same time period, the S&P 500 had an average
monthly return of 0.48%. The monthly average returns for the technical indicators
ranged from 0.83% to -1.21%. The full listing of the average monthly returns, in percent,
is shown in Table 3.
After gathering the sample period data, we separated the observations that occurred in
an up market from those in a down market. This was done by looking at the returns of
the S&P 500: months in which the S&P 500 return was positive were classified as up
market, and months in which it was negative were classified as down market. The
up-market period had a total of 72 observed months. During this time, the S&P 500 had
an average monthly return of 3.22%. Machine learning had a 4.13% average monthly
return, approximately 1% above the next highest method. As seen in Table 4, the
technical indicators ranged from 2.99% to -1.01%.
Table 4: Up-market period descriptive statistics (monthly returns, %)

Strategy            N    Minimum   Maximum   Mean     Std. Deviation
Machine Learning    72   -8.2      23.5      4.1289   5.96786
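The separation into up-market and down-market months can be sketched with pandas; the monthly return columns below are placeholders for the study's series:

import numpy as np
import pandas as pd

# Placeholder monthly returns, in percent, for the S&P 500 and one strategy
df = pd.DataFrame({
    "sp500":    [3.1, -2.4, 1.7, -5.0, 0.9],
    "strategy": [4.0, -3.2, 2.1, -6.1, 1.5],
})

# A month is "up market" when the S&P 500 return is positive, otherwise "down market"
df["regime"] = np.where(df["sp500"] > 0, "up", "down")
print(df.groupby("regime")["strategy"].agg(["count", "min", "max", "mean", "std"]))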
For the down market, as seen below in Table 5, there were only 48 observations in total.
During this time, the S&P 500 had an average monthly return of -3.63%. Machine
learning did not perform as well as in the whole-sample and up-market periods, with a
-3.21% average monthly return. The technical indicators, however, were more varied,
ranging from 1.82% to -3.47% (Table 5).
Table 5: Down-market period descriptive statistics (monthly returns, %)

Strategy                 N    Minimum    Maximum    Mean       Std. Deviation
KBand                    48   -13.129    13.31265   1.821142   5.092432
Williams %R              48   -9.02627   12.64072   0.969127   4.028968
Stochastics              48   -9.02627   8.703512   0.797199   3.399316
Cmdty Channel Index      48   -16.5331   13.06419   0.758744   4.575824
MA Envelopes             48   -5.4208    13.94343   0.645228   3.095968
MACD                     48   -9.07901   8.290536   0.576946   3.196419
Bollinger Bands          48   -13.129    15.85366   0.438836   4.352406
Trading Envelopes        48   -13.129    15.85366   0.438836   4.352406
MA Oscillator            48   -9.04289   10.24691   0.341037   4.330465
RSI                      48   -4.61304   4.268927   0.222634   1.484401
Ichimoku                 48   -5.92016   7.675862   0.022111   2.726831
Triangular MA            48   -8.58764   6.438574   -0.48563   3.062873
DMI                      48   -14.0814   8.636103   -0.49819   3.365372
Exponential MA           48   -8.53862   6.070957   -0.65036   2.504656
Weighted MA              48   -9.08872   6.827461   -0.71012   3.127204
Variable MA              48   -8.83962   5.930361   -0.74354   3.429761
Simple MA                48   -9.56601   5.930361   -0.77227   3.451656
Fear and Greed           48   -13.9833   7.076658   -0.83306   4.112265
Accum/Distrib Osc        48   -21.6401   16.67208   -1.20066   7.254137
Rate of Change           48   -18.1407   14.31845   -1.51186   6.01379
Rex Oscillator           48   -18.6651   10.76828   -2.09211   5.189241
Machine Learning         48   -20.4      14.19      -3.211     7.06144
Buy and Hold             48   -16.5331   2.241661   -3.47417   3.678578
Fundamental Analysis     48   -18.46     -0.1       -3.7363    3.64085
Valid N (listwise)       48
RESULTS
To test whether the machine learning results differ statistically from those of the
technical analysis methods, we used paired samples t-tests. The results, as seen below
in Table 6, are ordered from the highest average monthly return to the lowest for each of
the technical indicators, compared to the machine learning results, which had the
highest mean. At a 95% confidence level, machine learning outperformed the following
technical indicators: fear and greed, simple MA, weighted MA, variable MA, parabolic,
accum/distrib oscillator, Rex oscillator, and rate of change. For the up-market period,
machine learning outperformed the technical analysis results by a relatively large
margin. As seen in Table 7 below, the results for the up-market period were better than
those from the total 120 observations. At the 99% confidence level, machine learning
outperformed all but the buy and hold technical analysis method, which it outperformed
with marginal significance at the 80% level. Compared to the results from the whole
sample, this indicates that machine learning is more likely to outperform in an up-market
period.
Table 6: Paired samples t-tests, full sample period (120 months)

Pair     Strategy                                 Mean     Std. Dev.  Std. Error Mean  Lower (95%)  Upper (95%)  T       Df   Sig. (two-tailed)
Pair 1   Bollinger Bands – Machine Learning       -0.36    8.2        0.74             -1.84        1.21         -0.48   119  0.63
Pair 2   Trading Envelopes – Machine Learning     -0.36    8.2        0.74             -1.84        1.12         -0.553  119  0.63
Pair 3   KBand – Machine Learning                 -0.42    8.4        0.77             -1.96        1.104        -0.873  119  0.581
Pair 4   Cmdty Channel Index – Machine Learning   -0.654   8.2        0.74             -2.139       0.733        -0.967  119  0.384
Pair 5   Stochastics – Machine Learning           -0.7     7.9        0.72             -2.31        0.751        -1.01   119  0.335
Pair 6   Williams %R – Machine Learning           -0.78    8.49       0.77             -1.95        0.37         -1.34   119  0.314
Pair 7   Buy and Hold – Machine Learning          -0.8099  6.4        0.58             -1.99        0.29         -1.46   119  0.181
Pair 8   Fundamental Analysis – Machine Learning  -0.9033  6.09       0.55             -2.139       0.53         -1.24   119  0.148
Pair 9   MA Envelopes – Machine Learning          -0.915   7.94       0.77             -2.139       0.53         -1.3    119  0.215
Pair 10  RSI – Machine Learning                   -0.93    7.68       0.74             -2.139       0.47         -1.19   119  0.194
Pair 11  MACD – Machine Learning                  -1.14    8.68       0.74             -2.139       0.62         -1.776  119  0.236
Pair 12  Ichimoku – Machine Learning              -1.32    7.54       0.68             -3.009       0.22         -2.077  119  0.1
Pair 13  Triangular MA – Machine Learning         -1.38    8.13       0.72             -2.78        0.15         -2.32   119  0.078
Pair 14  DMI – Machine Learning                   -1.41    8.12       0.74             -3.21        0.081        -2.3    119  0.064
Pair 15  Exponential MA – Machine Learning        -1.42    7.9        0.69             -3.41        0.01         -2.41   119  0.052
Pair 16  MA Oscillator – Machine Learning         -1.42    8.77       0.741            -3.8         0.16         -1.46   119  0.078
Pair 17  Fear and Greed – Machine Learning        -1.73    8.1        0.74             -3.667       -0.066       -1.24   119  0.04
Pair 18  Simple MA – Machine Learning             -1.74    8.3        0.74             -3.891       -0.25        -1.3    119  0.022
Pair 19  Weighted MA – Machine Learning           -1.14    8.4        0.722            -3.009       -0.24        -2.32   119  0.023
Pair 20  Variable MA – Machine Learning           -1.87    8.03       0.743            -2.78        -0.33        -2.3    119  0.017
Pair 21  Parabolic – Machine Learning             -1.88    8.241      0.74             -3.21        -0.36        -2.41   119  0.016
Pair 22  Accum/Distrib Osc. – Machine Learning    -2.02    8.805      0.74             -3.41        -0.61        -2.79   119  0.007
Pair 23  Rex Oscillator – Machine Learning        -2.21    8.031      0.74             -1.99        -0.76        -3.02   119  0.003
Pair 24  Rate of Change – Machine Learning        -2.4     8.24       0.734            -3.891       -0.9098      -3.11   119  0.002
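The paired samples t-tests reported in these tables can be reproduced in outline with scipy; the two monthly return series below are placeholders, not the study's data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
machine_learning = rng.normal(1.19, 8.0, size=120)              # monthly returns, percent
indicator = machine_learning - rng.normal(0.7, 3.0, size=120)   # one technical indicator

# Paired t-test: is the mean of (indicator - machine learning) different from zero?
t_stat, p_value = stats.ttest_rel(indicator, machine_learning)
print(t_stat, p_value)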
The results for the down-market period showcased the weakness of machine learning.
Although it performed above many technical indicators in the positive-return period, it
underperformed in the down-market period. Over the 48 observed months with a
negative S&P 500 return, machine learning had close to the lowest average monthly
return (Table 8).
Table 7: Paired samples t-tests, up-market period (72 months)

Pair     Strategy                                 Mean     Std. Dev.  Std. Error Mean  Lower (95%)  Upper (95%)  T      Df  Sig. (two-tailed)
Pair 1   Bollinger Bands – Machine Learning       -0.9997  5.675      0.66             -2.33        0.333        -1.4   71  0.139
Pair 2   Trading Envelopes – Machine Learning     -3.035   6.78       0.79             -4.62        -1.44        -3.7   71  0.118
Pair 3   KBand – Machine Learning                 -3.034   6.857      0.79             -4.6         -1.44        -3.7   71  0.000
Pair 4   Cmdty Channel Index – Machine Learning   -3.73    6.42       0.802            -5.33        -2.13        -3.7   71  0.000
Pair 5   Stochastics – Machine Learning           -3.81    5.93       0.76             -6.72        -2.05        -3.7   71  0.000
Pair 6   Williams %R – Machine Learning           -3.82    6.87       0.808            -6.12        -2.04        -4.6   71  0.000
Pair 7   Buy and Hold – Machine Learning          -3.96    6.69       0.74             -4.62        -2.66        -4.7   71  0.000
Pair 8   Fundamental Analysis – Machine Learning  -4.01    5.93       0.75             -4.6         -2.45        -5.3   71  0.000
Pair 9   MA Envelopes – Machine Learning          -4.07    5.93       0.69             -5.33        -2.05        -5.8   71  0.000
Pair 10  RSI – Machine Learning                   -4.09    7.17       0.809            -4.6         -2.04        -5     71  0.000
Pair 11  MACD – Machine Learning                  -4.09    6.77       0.76             -6.72        -2.66        -5.1   71  0.000
Pair 12  Ichimoku – Machine Learning              -3.96    6.96       0.78             -6.12        -2.66        -5.5   71  0.000
Pair 13  Triangular MA – Machine Learning         -4.01    6.42       0.74             -6.11        -2.66        -5.6   71  0.000
Pair 14  DMI – Machine Learning                   -4.07    5.93       0.801            -6.43        -2.45        -5.7   71  0.000
Pair 15  Exponential MA – Machine Learning        -4.07    6.87       0.74             -6.47        -2.45        -5.5   71  0.000
Pair 16  MA Oscillator – Machine Learning         -4.09    6.42       0.78             -6.72        -2.04        -5.6   71  0.000
Pair 17  Fear and Greed – Machine Learning        -4.09    6.42       0.74             -6.9         -2.66        -5.7   71  0.000
Pair 18  Simple MA – Machine Learning             -4.07    5.93       0.801            -6.11        -2.45        -5.5   71  0.000
Pair 19  Weighted MA – Machine Learning           -4.07    6.87       0.74             -6.43        -2.04        -5.6   71  0.000
Pair 20  Variable MA – Machine Learning           -4.09    6.78       0.78             -6.47        -2.66        -5.7   71  0.000
Pair 21  Parabolic – Machine Learning             -4.09    6.857      0.74             -6.72        -2.04        -5.7   71  0.000
Pair 22  Accum/Distrib Osc. – Machine Learning    -3.81    6.42       0.801            -6.47        -2.66        -5.5   71  0.000
Pair 23  Rex Oscillator – Machine Learning        -5.022   7.26       0.855            -6.729       -3.31        -5.8   71  0.000
Pair 24  Rate of Change – Machine Learning        -5.13    7.51       0.8855           -6.9         -3.36        -5.7   71  0.000
Table 8: Paired samples t-tests, down-market period (48 months)

Pair     Strategy                                 Mean     Std. Dev.  Std. Error Mean  Lower (95%)  Upper (95%)  T       Df  Sig. (two-tailed)
Pair 1   Bollinger Bands – Machine Learning       5.03     7.74       1.118            3.15         6.9          4.5     47  0.000
Pair 2   Trading Envelopes – Machine Learning     4.1      8.45       1.22             2.13         6.22         3.4     47  0.001
Pair 3   KBand – Machine Learning                 3.99     7.11       1.02             2.27         5.7          3.8     47  0.000
Pair 4   Cmdty Channel Index – Machine Learning   3.96     8.921      1.28             1.62         5.94         3.48    47  0.001
Pair 5   Stochastics – Machine Learning           3.64     7.74       1.118            3.15         6.9          3.22    47  0.001
Pair 6   Williams %R – Machine Learning           3.78     7.11       1.02             2.27         5.7          3.8     47  0.005
Pair 7   Buy and Hold – Machine Learning          2.37     8.921      1.28             1.62         5.94         3.48    47  0.005
Pair 8   Fundamental Analysis – Machine Learning  2.01     8.57       1.237            1.57         5.72         2.95    47  0.005
Pair 9   MA Envelopes – Machine Learning          2.69     7.74       1.118            3.15         6.9          2.95    47  0.005
Pair 10  RSI – Machine Learning                   3.99     7.11       1.02             2.27         5.7          3.8     47  0.005
Pair 11  MACD – Machine Learning                  3.96     8.921      1.28             1.62         5.94         3.48    47  0.002
Pair 12  Ichimoku – Machine Learning              3.64     7.74       1.118            3.15         6.9          3.45    47  0.037
Pair 13  Triangular MA – Machine Learning         2.21     8.08       1.02             1.57         4.787                47  0.347
Pair 14  DMI – Machine Learning                   2.23     8.24       1.28             1.66         4.85         3.22    47  0.231
Pair 15  Exponential MA – Machine Learning        2.27     8.4        1.237            0.58         4.873        3.8     47  0.783
Pair 16  MA Oscillator – Machine Learning         3.99     8.37       1.118            0.46         4.53         3.48    47  0.581
Pair 17  Fear and Greed – Machine Learning        3.96     8.412      1.213            0.401        4.49         2.95    47  0.005
Pair 18  Simple MA – Machine Learning             3.64     7.68       1.02             0.517        4.47         1.499   47  0.005
Pair 19  Weighted MA – Machine Learning           3.99     9.29       1.28             -0.23        4.23         1.541   47  0.006
Pair 20  Variable MA – Machine Learning           3.96     7.63       1.237            -0.85        4.26         0.95    47  0.47
Pair 21  Parabolic – Machine Learning             3.64     9.47       1.118            -1.19        3.95         -0.264  47  0.783
Pair 22  Accum/Distrib Osc. – Machine Learning    1.1      8.15       1.12             -0.85        3.09         -2.64   47  0.581
Pair 23  Rex Oscillator – Machine Learning        -0.26    6.89       0.99             -1.932       1.406        -0.261  47  0.006
Pair 24  Rate of Change – Machine Learning        -0.5252  6.7        0.97             -2.15        1.1          -0.541  47  0.47
CONCLUSION
In conclusion, after analyzing the results, we find that using machine learning as a
trading strategy can positively impact the returns generated compared to using many
technical indicators. Over the full sample, there was no statistically significant difference
between machine learning and most technical analysis methods. In up-market periods,
machine learning will outperform technical analysis; however, in a down market it is
more beneficial to use technical analysis. Machine
REFERENCES
34. Khoshgoftaar TM, Dittman DJ, Wald R, Fazelpour A (2012) First order
statistics based feature selection: A diverse and powerful family of feature
selection techniques. Proceedings of the Eleventh International Conference
on Machine Learning and Applications (ICMLA): Health Informatics
Workshop, pp: 151-157.