the squared amplitude or length of the resultant vector. The final block of code in
this function normalizes the vector of price differences to unit length.
The general code that implements the model follows our standard practice.
After a block of declarations, a number of parameters are copied to local variables for
convenient reference. The 50-bar average true range, which is used for the standard
exit, and the time-reversed 10-bar Slow %K, used as a target, are then computed.
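The time-reversed Slow %K target can be sketched as follows. This is an illustrative reconstruction, not the book's actual code: the function name, the exact windowing, and the omission of the Slow %K smoothing step are assumptions; only the core idea — measuring where today's close sits within the range of the next 10 bars — comes from the text.

```c
/* Raw time-reversed %K: the position of today's close within the
   high-low range of the NEXT n bars, scaled to 0..100.  A high value
   means the close is near the top of its near-future range. */
double reversed_pctk(const double *cls, const double *hi,
                     const double *lo, int cb, int n) {
    double hh = hi[cb + 1], ll = lo[cb + 1];
    for (int k = 2; k <= n; k++) {       /* scan bars cb+1 .. cb+n */
        if (hi[cb + k] > hh) hh = hi[cb + k];
        if (lo[cb + k] < ll) ll = lo[cb + k];
    }
    if (hh == ll) return 50.0;           /* flat range: neutral value */
    return 100.0 * (cls[cb] - ll) / (hh - ll);
}
```

The Slow variant would additionally smooth this raw value over a few bars; that step is omitted here for brevity.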
One of the parameters (mode) sets the mode in which the code will run. A
mode of 1 runs the code to prepare a fact file: The file is opened, the header (con-
sisting of the number of inputs, 18, and the number of targets, 1) is written, and the
fact count is initialized to zero. This process only occurs for the first market in the
portfolio. The file remains open during all further processing until it is closed after
the last tradable in the portfolio has been processed. After the header, facts are writ-
ten to the file. All data before the in-sample date and after the out-of-sample date
are ignored. Only the in-sample data are used. Each fact written to the file consists
of a fact number, the 18 input variables (obtained using PrepareNeuralInputs), and
the target (which is the time-reversed Slow %K). Progress information is displayed
for the user as the fact file is prepared.
If mode is set to 2, a neural network that has been trained using the fact file
discussed above is used to generate entries into trades. The first block of code
opens and loads the desired neural network before beginning to process the first
commodity. Then the standard loop begins. It steps through bars to simulate actu-
al trading. After executing the usual code to update the simulator, calculate the
number of contracts to trade, avoid limit-locked days, etc., the block of code is
reached that generates the entry signals, stop prices, and limit prices. The
PrepareNeuralInputs function is called to generate the inputs corresponding to the
current bar, these inputs are fed to the net, the network is told to run itself, the out-
put from the net is retrieved, and the entry signal is generated.
The rules used to generate the entry signal are as follows. If the output from the
network is greater than a threshold (thresh), a sell signal is issued; the net is predicting
a high value for the time-reversed Slow %K, meaning that the current closing price
might be near the high of its near-future range. If the output from the network (the pre-
diction of the time-reversed Slow %K) is below 100 minus thresh, a buy signal is
issued. As an example, if thresh were set to 80, any predicted time-reversed Slow %K
greater than 80 would result in the posting of a sell signal, and any predicted time-
reversed Slow %K less than 20 would result in the issuing of a buy signal.
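These rules reduce to a small helper. The sketch below is illustrative (the function name and signal encoding are assumptions, not the book's code):

```c
/* Map the net's predicted time-reversed Slow %K (0..100) to a signal:
   -1 = sell, +1 = buy, 0 = stand aside.  The threshold is symmetric:
   sell above thresh, buy below 100 - thresh. */
int entry_signal(double netout, double thresh) {
    if (netout > thresh) return -1;          /* close near future high: sell */
    if (netout < 100.0 - thresh) return 1;   /* close near future low: buy   */
    return 0;                                /* no confident prediction      */
}
```

With thresh = 80, predictions above 80 post a sell and predictions below 20 post a buy, matching the example in the text.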
Finally, there are the two blocks of code used to issue the actual entry orders
and to implement the standardized exits. These blocks of code are identical to
those that have appeared and been discussed in previous chapters.

Test Methodology for the Reverse Slow %K Model
The model is executed with the mode parameter set to 1 to generate a fact set. The
fact set is loaded into N-TRAIN, a neural network development kit (Scientific
Consultant Services, 516-696-3333), appropriately scaled for neural processing,
and shuffled, as required when developing a neural network. A series of networks
are then trained, beginning with a small network and working up to a fairly large
network. Most of the networks are simple, 3-layer nets. Two 4.layer networks are
also trained. All nets are trained to maximum convergence and then “polished” to
remove any small biases or offsets. The process of polishing is achieved by reduc-
ing the learning rate to a very small number and then continuing to train the net for
about 50 runs.
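The polishing schedule can be illustrated on a toy model. N-TRAIN's internals are not shown in the text, so the sketch below substitutes a single linear neuron trained by per-sample gradient descent — an assumption made purely to demonstrate the schedule: full-rate training to convergence, then about 50 passes at a tiny learning rate.

```c
#include <stddef.h>

typedef struct { double w, b; } Neuron;      /* one linear neuron: y = w*x + b */

static void train_pass(Neuron *n, const double *x, const double *y,
                       size_t m, double lr) {
    for (size_t i = 0; i < m; i++) {
        double err = n->w * x[i] + n->b - y[i];  /* prediction error */
        n->w -= lr * err * x[i];                 /* gradient update  */
        n->b -= lr * err;
    }
}

void train_and_polish(Neuron *n, const double *x, const double *y, size_t m) {
    for (int e = 0; e < 500; e++)                /* train to convergence   */
        train_pass(n, x, y, m, 0.1);
    for (int e = 0; e < 50; e++)                 /* "polish" at a tiny rate */
        train_pass(n, x, y, m, 0.001);
}
```

The second loop cannot move the weights far; it only shaves off small residual biases left by the large-step phase, which is the point of polishing.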
Table 11-1 contains information regarding all networks that were trained for
this model, along with the associated correlations and other statistics. In the table,
Net Name = the file name to which the net was saved; Net Size = the number of
layers and the number of neurons in each layer; Connections = the number of con-
nections in the net optimized by the training process (similar to the number of
regression coefficients in a multiple regression in terms of their impact on curve-
fitting and shrinkage); and Correlation = the multiple correlation of the network
output with the target (this is not a squared multiple correlation but an actual cor-
relation). Corrected for Shrinkage covers two columns: The left one represents the
correlation corrected for shrinkage under the assumption of an effective sample
size of 40,000 data points or facts in the training set. The right column represents
the correlation corrected for shrinkage under the assumption of 13,000 data points
or facts in the training set. The last line of the table contains the number of facts
or data points (Actual N) and the number of data points assumed for each of the
shrinkage corrections (Assumed).
The number of data points specified to the shrinkage adjustment equation is
smaller than the actual number of facts or data points in the training set. The reason

TABLE 11-1

Training Statistics for Neural Nets to Predict Time-Reversed Slow %K

Net Name   Net Size    Connections
NN3.NET    18-8-1      152
NN4.NET    18-10-1     190
NN5.NET    18-12-1     228
NN8.NET    18-14-4-1   312
[remaining rows and correlation columns illegible in source]

is the presence of redundancy between facts. Specifically, a fact derived from one
bar is likely to be fairly similar to a fact derived from an immediately adjacent bar.
Because of the similarity, the “effective” number of data points, in terms of con-
tributing statistically independent information, will be smaller than the actual num-
ber of data points. The two corrected correlation columns represent adjustments
assuming two differently reduced numbers of facts. The process of correcting corre-
lations is analogous to that of correcting probabilities for multiple tests in optimiza-
tion: As a parameter is stepped through a number of values, results are likely to be
similar for nearby parameter values, meaning the effective number of tests is some-
what less than the actual number of tests.

Training Results for Time-Reversed Slow %K Model
As evident from Table 11-1, the raw correlations rose monotonically with the size
of the network in terms of numbers of connections. When adjusted for shrinkage,
by assuming an effective sample size of 13,000, the picture changed dramatically:
The nets that stood out were the small 3-layer net with 6 middle-layer neurons, and
the smaller of the two 4-layer networks. With the more moderate shrinkage cor-
rection, the two large 4-layer networks had the highest estimated predictive abili-
ty, as indicated by the multiple correlation of their outputs with the target.
On the basis of the more conservative statistics (those assuming a smaller
effective sample size and, hence, more shrinkage due to curve-fitting) in Table 11-1,
two neural nets were selected for use in the entry model: the 18-6-1 network
(nn2.net) and the 18-14-4-1 network (nn8.net). These were considered the best
bets for nets that might hold up out-of-sample. For the test of the entry model
using these nets, the model implementation was run with mode set to 2. As usual,
all order types (at open, on limit, on stop) were tested.

For these models, two additional fact sets are needed. Except for their targets,
these fact sets are identical to the one constructed for the time-reversed Slow %K.
The target for the first fact set is a 1, indicating a bottom turning point, if tomor-
row's open is lower than the 3 preceding bars and 10 succeeding bars. If not, this
target is set to 0. The target for the second fact set is a 1, indicating a top, if tomor-
row's open has a price higher than the preceding 3 and succeeding 10 opens.
Otherwise this target is set to 0. Assuming there are consistent patterns in the mar-
ket, the networks should be able to learn them and, therefore, predict whether
tomorrow's open is going to be a top, a bottom, or neither.
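As a concrete sketch, the bottom target can be computed as below. The helper name and the exact bar indexing (tomorrow's open taken as bar cb+1) are assumptions based on the text's description; the top target would mirror it with the comparisons reversed.

```c
/* 1 if tomorrow's open is lower than the opens of the 3 preceding
   bars and of the 10 succeeding bars; otherwise 0. */
int bottom_target(const double *opn, int cb) {
    for (int k = cb - 2; k <= cb; k++)         /* 3 preceding opens   */
        if (opn[cb + 1] >= opn[k]) return 0;
    for (int k = cb + 2; k <= cb + 11; k++)    /* 10 succeeding opens */
        if (opn[cb + 1] >= opn[k]) return 0;
    return 1;
}
```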
Unlike the fact set for the time-reversed Slow %K model, the facts in the sets
for these models are generated only if tomorrow's open could possibly be a turn-
ing point. If, for example, tomorrow's open is higher than today's open, then
tomorrow's open cannot be considered a turning point, as defined earlier, no mat-
ter what happens thereafter. Why ask the network to make a prediction when there
is no uncertainty or need? Only in cases where there is an uncertainty about
whether tomorrow's open is going to be a turning point is it worth asking the net-
work to make a forecast. Therefore, facts are only generated for such cases.
The processing of the inputs, the use of statistics, and all other aspects of the
test methodology for the turning-point models are identical to that for the time-
reversed Slow %K model. Essentially, both models are identical, and so is the
methodology; only the subjects of the predictions, and, consequently, the targets
on which the nets are trained, differ. Lastly, since the predictions are different, the
rules for generating entries based on the predictions are different between models.
The outputs of the trained networks represent the probabilities, ranging from
0 to 1, of whether a bottom, a top, or neither is present. The two sets of rules for
the two models for generating entries are as follows: For the first model, if the bot-
tom predictor output is greater than a threshold, buy. For the second model, if the
top predictor output is greater than a threshold, sell. For both models, the thresh-
old represents a level of confidence that the nets must have that there will be a bot-
tom or a top before an entry order is placed.
// write actual in-sample facts to the fact file
for(cb = 1; cb <= nb; cb++) {
   if(dt[cb] < IS_DATE) continue;              // lookback
   if(dt[cb+10] > OOS_DATE) break;             // ignore OOS data
   if(opn[cb+1] >= Lowest(opn, 3, cb))
      continue;                                // skip these facts
   fprintf(fil, "%6d", ++factcount);           // fact number
   PrepareNeuralInputs(var, cls, cb);
   for(k = 1; k <= 18; k++)
      fprintf(fil, "%7.3f", var[k]);           // standard inputs
   if(opn[cb+1] < Lowest(opn, 9, cb+10))
      netout = 1.0; else netout = 0.0;         // calculate target
   fprintf(fil, "%6.1f\n", netout);            // write target
   if((cb % 500) == 1)
      printf("cb = %d\n", cb);                 // progress info
}
// generate entry signals, stop prices and limit prices
if(opn[cb+1] < Lowest(opn, 3, cb)) {           // run only these
   PrepareNeuralInputs(var, cls, cb);          // preprocess data
   ntlset_input(nnet, &var[1]);                // feed net inputs
   ntlfire(nnet);                              // run the net
   netout = ntlget_output(nnet, 0);            // get output
   netout *= 100.0;                            // scale to percent
Since the code for the bottom predictor model is almost identical to that of the time-
reversed Slow %K model, only the two blocks that contain changed code are pre-
sented above. In the first block of code, the time-reversed Slow %K is not used.
Instead, a series of ones or zeros is calculated that indicates the presence (1) or
absence (0) of bottoms (bottom target). When writing the facts, instead of writing the
time-reversed Slow %K, the bottom target is written. In the second block of code, the
rules for comparing the neural output with an appropriate threshold, and generating
the actual entry buy signals, are implemented. In both blocks, code is included to pre-
vent the writing of facts, or use of predictions, when tomorrow's open could not pos-
sibly be a bottom. Similar code fragments for the top predictor model appear below.

   if(dt[cb] < IS_DATE) continue;              // lookback
   if(dt[cb+10] > OOS_DATE) break;             // ignore OOS data
   if(opn[cb+1] <= Highest(opn, 3, cb))
      continue;                                // skip these facts
   fprintf(fil, "%6d", ++factcount);           // fact number
   if(opn[cb+1] > Highest(opn, 9, cb+10))
      netout = 1.0; else netout = 0.0;         // calculate target
   fprintf(fil, "%6.1f\n", netout);            // write target
Test Methodology for the Turning-Point Model
The test methodology for this model is identical to that used for the time-reversed
Slow %K model. The fact set is generated, loaded into N-TRAIN, scaled, and shuf-
fled. A series of nets (from 3- to 4-layer ones) are trained to maximum convergence
and then polished. Statistics such as shrinkage-corrected correlations are calculated.

Training Results for the Turning-Point Model
Bottom Forecaster. The structure of Table 11-2 is identical to that of Table 11-1. As
with the net trained to predict the time-reversed Slow %K, there was a monotonic
relationship between the number of connections in the network and the multiple cor-
relation of the network™s output with the target; i.e., larger nets evinced higher corre-
lations. The net was trained on a total of 23,900 facts, which is a smaller fact set than
that for the time-reversed Slow %K. The difference in number of facts resulted
because the only facts used were those that contained some uncertainty about whether
tomorrow™s open could be a turning point. Since the facts for the bottom forecaster
came from more widely spaced points in the time series, it was assumed that there
would be less redundancy among them. When corrected for shrinkage, effective sam-
ple sizes of 23,900 (equal to the actual number of facts) and 8,000 (a reduced effec-
tive fact count) were assumed. In terms of the more severely adjusted correlations, the
best net in this model appeared to be the largest 4-layer network; the smaller 4-layer
network was also very good. Other than these two nets, only the 3-layer network with
10 middle-layer neurons was a possible choice. For tests of trading performance, the
large 4-layer network (nn9.net) and the much smaller 3-layer network (nn4.net) were
selected.

TABLE 11-2

Training Statistics for Neural Nets to Predict Bottom Turning Points

Net Name   Net Size    Connections  Correlation  Corrected for Shrinkage
NN1.NET    18-4-1        76         0.109        0.084     0.050
NN2.NET    18-6-1       114         0.121        0.100     0.025
NN3.NET    18-8-1       152         0.148        0.122     [illegible]
NN4.NET    18-10-1      190         [values illegible in source]
NN5.NET    18-12-1      228         0.167        0.137    -0.019
NN8.NET    18-16-1      304         0.185        0.148    -0.080
[name illegible]  18-20-1   380     0.225        0.188     0.057
NN7.NET    18-14-4-1    312         0.219        0.188     0.098
NN9.NET    [size illegible]  488    0.294        0.200     0.188

Actual N   23900        Assumed     23900        8000

TABLE 11-3

Training Statistics for Neural Nets to Predict Top Turning Points

Net Name   Net Size    Connections  Correlation  Corrected for Shrinkage
NN1.NET    18-4-1        76         0.103        0.068     0.035
NN2.NET    18-6-1       114         0.117        [values illegible]
[remaining rows illegible in source]

Actual N   25919        Assumed     [values illegible]

Top Forecaster. Table 11-3 contains the statistics for the nets in this model; they
were trained on 25,919 facts. Again, the correlations were directly related in size to
the number of connections in the net, with a larger number of connections leading
to a better model fit. When mildly corrected for shrinkage, only the smaller 4-layer
network deviated from this relationship by having a higher correlation than would
be expected. When adjusted under the assumption of large amounts of curve-fitting
and shrinkage, only the two 4-layer networks stood out, with the largest one
(nn9.net) performing best. The only other high correlation obtained was for the
18-10-1 net (nn4.net). To maximize the difference between the nets used in the trading
tests, the largest 4-layer net, which was the best shrinkage-corrected performer, and
the fairly small (18-10-1) net were chosen.

Table 11-4 provides data regarding whole portfolio performance with the best in-
sample parameters for each test in the optimization and verification samples. The
information is presented for each combination of order type, network, and model.
In the table, SAMP = whether the test was on the training or verification sample
(IN or OUT); ROA% = the annualized return-on-account; ARRR = the annualized
risk-to-reward ratio; PROB = the associated probability or statistical significance;
TRDS = the number of trades taken across all commodities in the portfolio;
WIN% = the percentage of winning trades; $TRD = the average profit/loss per
trade; BARS = the average number of days a trade was held; NETL = the total net
profit on long trades, in thousands of dollars; NETS = the total net profit on short
trades, in thousands of dollars. Columns P1, P2, and P3 represent parameter val-
TABLE 11-4

Portfolio Performance with Best In-Sample Parameters for Each Test
in the Optimization and Verification Samples


ues: P1 = the threshold, P2 = the number of the neural network within the group
of networks trained for the model (these numbers correspond to the numbers used
in the file names for the networks shown in Tables 11-1 through 11-3), P3 = not
used. In all cases, the threshold parameters (column P1) shown are those that
resulted in the best in-sample performance. Identical parameters are used for ver-
ification on the out-of-sample data.
The threshold for the time-reversed Slow %K model was optimized for each
order type by stepping it from 50 to 90 in increments of 1. For the top and bottom
predictor models, the thresholds were stepped from 20 to 80 in increments of 2. In
each case, optimization was carried out only using the in-sample data. The best
parameters were then used to test the model on both the in-sample and out-of-sam-
ple data sets. This follows the usual practice established in this book.
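The search itself is a simple in-sample sweep. In the hedged sketch below, the performance values are supplied as an array (an assumption for illustration); in the real tests each value would come from a full portfolio simulation run at that threshold.

```c
/* Step through candidate thresholds lo, lo+step, ..., hi and return
   the one with the best in-sample performance.  perf[i] holds the
   performance measured at the i-th candidate threshold. */
int best_threshold(const double *perf, int lo, int hi, int step) {
    int best = lo;
    double bestval = perf[0];
    int i = 1;
    for (int t = lo + step; t <= hi; t += step, i++) {
        if (perf[i] > bestval) { bestval = perf[i]; best = t; }
    }
    return best;
}
```

For the time-reversed Slow %K model the sweep is (50, 90, 1); for the turning-point models it is (20, 80, 2). Only in-sample data feed the performance values, so the out-of-sample verification stays untouched.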

Trading Results for the Reverse Slow %K Model
The two networks that were selected as most likely to hold up out-of-sample, based
on their shrinkage-adjusted multiple correlations with the target, were analyzed for
trading performance. The first network was the smaller of the two, having 3 layers
(the 18-6-1 network). The second network had 4 layers (the 18-14-4-1 network).

Results Using the 18-6-1 Network. In-sample, as expected, the trading results
were superb. The average trade yielded a profit of greater than $6,000 across all
order types, and the system provided an exceptional annual return, ranging from
192.9% (entry at open, Test 1) to 134.6% (entry on stop, Test 3). Results this
good were obtained because a complex model containing 114 free parameters
was fitted to the data. Is there anything here beyond curve-fitting? Indeed there
is. With the stop order, out-of-sample performance was actually slightly profitable;
nothing very tradable, but at least not in negative territory: The average
trade pulled $362 from the market. Even though losses resulted out-of-sample
for the other two order types, the losses were rather small when compared with
those obtained from many of the tests in other chapters: With entry at open, the
system lost only $233 per trade. With entry on limit (Test 2), it lost $331. Again,
as has sometimes happened in other tests of countertrend models, a stop order,
rather than a limit order, performed best. The system was profitable out-of-sam-
ple across all orders when only the long trades were considered. It lost across all
orders on the short side.
In-sample performance was fabulous in almost every market in the portfolio,
with few exceptions. This was true across all order types. The weakest performance
was observed for Eurodollars, probably a result of the large number of contracts
(hence high transaction costs) that must be traded in this market. Weak
performance was also noted for Silver, Soybean Oil, T-Bonds, T-Bills, Canadian
Dollar, British Pound, Gold, and Cocoa. There must be something about these
markets that makes them difficult to trade, because, in-sample, most markets per-
form well. Many of these markets also performed poorly with other models.
Out-of-sample, good trading was obtained across all three orders for the T-
Bonds (which did not trade well in-sample), the Deutschemark, the Swiss Franc, the
Japanese Yen, Unleaded Gasoline, Gold (another market that did not trade well in-
sample), Palladium, and Coffee. Many other markets were profitable for two of the
three order types. The number of markets that could be traded profitably out-of-sam-
ple using neural networks is a bit surprising. When the stop order (overall, the best-
performing order) was considered, even the S&P 500 and NYFE yielded substantial
profits, as did Feeder Cattle, Live Cattle, Soybeans, Soybean Meal, and Oats.
Figure 11-1 illustrates the equity growth for the time-reversed Slow %K pre-
dictor model with entry on a stop. The equity curve was steadily up in-sample, and
continued its upward movement throughout about half of the out-of-sample peri-
od, after which there was a mild decline.

Results of the 18-14-4-1 Network. This network provided trading performance
that showed more improvement in-sample than out-of-sample. In-sample, returns
FIGURE 11-1

Equity Growth for Reverse Slow %K 18-6-l Net, with Entry on a Stop



ranged from a low of 328.9% annualized (stop order, Test 6) to 534.7% (entry at
open, Test 4). In all cases, there was greater than $6,000 profit per trade. As usual,
the longs were more profitable than the shorts. Out-of-sample, every order type
produced losses. However, as noted in the previous set of tests, the losses were
smaller than typical of losing systems observed in many of the other chapters: i.e.,
the losses were about $1,000 per trade, rather than $2,000. This network also took
many more trades than the previous one. The limit order performed best (Test 5).
The long side evidenced smaller losses than the short side, except in the case of
the stop order, where the short side had relatively small losses. The better in-sam-
ple performance and worsened out-of-sample performance are clear evidence of
curve-fitting. The larger network, with its 312 parameters, was able to capitalize
on the idiosyncrasies of the training data, thereby increasing its performance in-
sample and decreasing it out-of-sample.
In-sample, virtually every market was profitable across every order. There
were only three exceptions: Silver, the Canadian Dollar, and Cocoa. These mar-
kets seem hard to trade using any system. Out-of-sample, several markets were
profitable across all three order types: the Deutschemark, the Canadian Dollar,
Light Crude, Heating Oil, Palladium, Feeder Cattle, Live Cattle, and Lumber. A
few other markets traded well with at least one of the order types.
The equity curve showed perfectly increasing equity until the out-of-sample
period, at which point it mildly declined. This is typical of a curve resulting from
overoptimization. Given a sample size of 88,092 facts, this network may have
been too large.

Trading Results for the Bottom Turning-Point Model
The two networks that were selected, on the basis of their corrected multiple correla-
tions with the target, as most likely to hold up out-of-sample are analyzed for trading
