How to Predict Stock Prices in Python using TensorFlow 2 and Keras

Predicting stock prices using a Long Short-Term Memory (LSTM) recurrent neural network in Python with TensorFlow 2 and Keras.
Abdou Rockikz · 16 min read · Updated Feb 2020 · Machine Learning


Predicting stock prices has always been an attractive topic for both investors and researchers. Investors always wonder whether the price of a stock will rise or not; since there are many complicated financial indicators that only people with good finance knowledge can interpret, the trend of the stock market looks inconsistent and very random to ordinary people.

Machine learning is a great opportunity for non-experts to make reasonably accurate predictions, and it may help experts extract the most informative indicators and make better predictions.

The purpose of this tutorial is to build a neural network in TensorFlow 2 and Keras that predicts stock market prices. More specifically, we will build a Recurrent Neural Network with LSTM cells, as they are widely used in time series forecasting.

Alright, let's get started. First, you need to install TensorFlow 2 and the other libraries:

pip3 install tensorflow pandas numpy matplotlib yahoo_fin scikit-learn

You can find more information on how to install TensorFlow 2 here.

Once you have everything set up, open up a new Python file (or a notebook) and import the following libraries:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from yahoo_fin import stock_info as si
from collections import deque
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import random
import time
import os

We are using the yahoo_fin module; it is essentially a Python scraper that extracts finance data from the Yahoo Finance platform, so it isn't a reliable API. Feel free to use other data sources such as Alpha Vantage.
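
As a quick sanity check that yahoo_fin works on your machine, a minimal snippet like the following (AAPL is just an example ticker) should print a DataFrame of daily prices:

from yahoo_fin import stock_info as si

# download the full daily price history for a ticker (AAPL as an example)
df = si.get_data("AAPL")
# the dataframe is indexed by date with open/high/low/close/adjclose/volume columns
print(df.tail())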

Learn also: How to Build a Spam Classifier using Keras in Python.

Preparing the Dataset

As a first step, we need to write a function that downloads the dataset from the Internet and preprocesses it:

def load_data(ticker, n_steps=50, scale=True, shuffle=True, lookup_step=1, 
                test_size=0.2, feature_columns=['adjclose', 'volume', 'open', 'high', 'low']):
    # see if ticker is already a loaded stock from yahoo finance
    if isinstance(ticker, str):
        # load it from yahoo_fin library
        df = si.get_data(ticker)
    elif isinstance(ticker, pd.DataFrame):
        # already loaded, use it directly
        df = ticker
    # this will contain all the elements we want to return from this function
    result = {}
    # we will also return the original dataframe itself
    result['df'] = df.copy()
    # make sure that the passed feature_columns exist in the dataframe
    for col in feature_columns:
        assert col in df.columns
    if scale:
        column_scaler = {}
        # scale the data (prices) from 0 to 1
        for column in feature_columns:
            scaler = preprocessing.MinMaxScaler()
            df[column] = scaler.fit_transform(np.expand_dims(df[column].values, axis=1))
            column_scaler[column] = scaler

        # add the MinMaxScaler instances to the result returned
        result["column_scaler"] = column_scaler
    # add the target column (label) by shifting by `lookup_step`
    df['future'] = df['adjclose'].shift(-lookup_step)
    # the last `lookup_step` rows contain NaN in the future column
    # get them before dropping NaNs
    last_sequence = np.array(df[feature_columns].tail(lookup_step))
    # drop NaNs
    df.dropna(inplace=True)
    sequence_data = []
    sequences = deque(maxlen=n_steps)
    for entry, target in zip(df[feature_columns].values, df['future'].values):
        sequences.append(entry)
        if len(sequences) == n_steps:
            sequence_data.append([np.array(sequences), target])
    # get the last sequence by appending the last `n_step` sequence with `lookup_step` sequence
    # for instance, if n_steps=50 and lookup_step=10, last_sequence should be of 59 (that is 50+10-1) length
    # this last_sequence will be used to predict in future dates that are not available in the dataset
    last_sequence = list(sequences) + list(last_sequence)
    # shift the last sequence by -1
    last_sequence = np.array(pd.DataFrame(last_sequence).shift(-1).dropna())
    # add to result
    result['last_sequence'] = last_sequence
    # construct the X's and y's
    X, y = [], []
    for seq, target in sequence_data:
        X.append(seq)
        y.append(target)
    # convert to numpy arrays
    X = np.array(X)
    y = np.array(y)
    # X is already shaped (samples, n_steps, n_features), which is what the LSTM layers expect
    # split the dataset
    result["X_train"], result["X_test"], result["y_train"], result["y_test"] = train_test_split(X, y, test_size=test_size, shuffle=shuffle)
    # return the result
    return result

This function is long but handy; it accepts several arguments to be as flexible as possible.

The ticker argument is the ticker we want to load; for instance, you can use TSLA for Tesla, AAPL for Apple, and so on.

n_steps is an integer indicating the historical sequence length we want to use; some people call it the window size. Recall that we are going to use a recurrent neural network, so we need to feed it sequence data; choosing 50 means that we will use 50 days of stock prices to predict the next day.
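
To see the sliding window in isolation, here is a toy example (with made-up numbers and n_steps=3) that uses the same deque trick as load_data():

from collections import deque
import numpy as np

# a toy "price" series and a window size of 3
prices = [1, 2, 3, 4, 5, 6]
sequences = deque(maxlen=3)
windows = []
for price in prices:
    # once the deque is full, each append drops the oldest entry,
    # so every iteration from then on yields the next window
    sequences.append(price)
    if len(sequences) == 3:
        windows.append(np.array(sequences))
print(windows)  # [array([1, 2, 3]), array([2, 3, 4]), array([3, 4, 5]), array([4, 5, 6])]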

scale is a boolean that indicates whether to scale prices from 0 to 1; we will set this to True, as scaling large values to the 0-1 range helps the neural network learn much faster and more effectively.
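
As a tiny illustration of what MinMaxScaler does (made-up prices; each column in load_data() gets its own scaler like this):

from sklearn import preprocessing
import numpy as np

prices = np.array([[100.0], [150.0], [200.0]])
scaler = preprocessing.MinMaxScaler()
scaled = scaler.fit_transform(prices)
print(scaled.ravel())                            # [0.  0.5 1. ]
# the scaler remembers min/max, so we can recover the original prices later
print(scaler.inverse_transform(scaled).ravel())  # [100. 150. 200.]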

lookup_step is the future step to predict; the default is set to 1 (i.e., the next day).

We will be using all the features available in this dataset, which are the open, high, low, volume and adjusted close.

The above function does the following:

  • First, it loads the dataset using stock_info.get_data() function in yahoo_fin module.
  • If the scale argument is passed as True, it will scale all the prices from 0 to 1 (including the volume) using the sklearn's MinMaxScaler class. Note that each column has its own scaler.
  • It then adds the future column which indicates the target values (the labels to predict, or the y's) by shifting the adjusted close column by lookup_step.
  • After that, it shuffles and splits the data and returns the result.

To understand this even better, I highly suggest you manually print the output variable (result) and see how the features and labels are made.
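
For example, a quick inspection along these lines (assuming the load_data() function above is defined) shows the shapes you should expect:

data = load_data("AAPL", n_steps=50, lookup_step=1)
# feature windows: (num_samples, n_steps, n_features)
print("X_train:", data["X_train"].shape)
print("X_test:", data["X_test"].shape)
# scaled target prices: (num_samples,)
print("y_train:", data["y_train"].shape)
# the final window, used later to predict dates not in the dataset
print("last_sequence:", data["last_sequence"].shape)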

Related: How to Make a Speech Emotion Recognizer Using Python And Scikit-learn.

Model Creation

Now that we have a proper function to load and prepare the dataset, we need another core function to build our model:

def create_model(n_features, units=256, cell=LSTM, n_layers=2, dropout=0.3,
                loss="mean_absolute_error", optimizer="rmsprop"):
    model = Sequential()
    for i in range(n_layers):
        if i == 0:
            # first layer
            model.add(cell(units, return_sequences=True, input_shape=(None, n_features)))
        elif i == n_layers - 1:
            # last layer
            model.add(cell(units, return_sequences=False))
        else:
            # hidden layers
            model.add(cell(units, return_sequences=True))
        # add dropout after each layer
        model.add(Dropout(dropout))
    model.add(Dense(1, activation="linear"))
    model.compile(loss=loss, metrics=["mean_absolute_error"], optimizer=optimizer)
    return model

Again, this function is flexible too: you can change the number of layers, the dropout rate, the RNN cell, the loss, and the optimizer used to compile the model.

The above function constructs an RNN with a dense output layer of one neuron. The model expects sequences of consecutive time steps (days in this dataset), each carrying n_features values (5 in our case), and outputs a single value indicating the price at the next time step.
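
As a quick sanity check, you can build an instance for our five feature columns and print its summary to see the layer shapes and parameter counts:

# 5 = number of feature columns (adjclose, volume, open, high, low)
model = create_model(5, units=256, n_layers=2)
model.summary()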

Training the Model

Now that we have all the core functions ready, let's train our model. But before we do that, let's initialize all our parameters (so you can edit them later to your needs):

# Window size or the sequence length
N_STEPS = 50
# Lookup step, 1 is the next day
LOOKUP_STEP = 1
# test ratio size, 0.2 is 20%
TEST_SIZE = 0.2
# features to use
FEATURE_COLUMNS = ["adjclose", "volume", "open", "high", "low"]
# date now
date_now = time.strftime("%Y-%m-%d")
### model parameters
N_LAYERS = 3
# LSTM cell
CELL = LSTM
# 256 LSTM neurons
UNITS = 256
# 40% dropout
DROPOUT = 0.4
### training parameters
# mean squared error loss
LOSS = "mse"
OPTIMIZER = "rmsprop"
BATCH_SIZE = 64
EPOCHS = 300
# Apple stock market
ticker = "AAPL"
ticker_data_filename = os.path.join("data", f"{ticker}_{date_now}.csv")
# model name to save
model_name = f"{date_now}_{ticker}-{LOSS}-{CELL.__name__}-seq-{N_STEPS}-step-{LOOKUP_STEP}-layers-{N_LAYERS}-units-{UNITS}"

Let's make sure the results, logs and data folders exist before we train:

# create these folders if they don't exist
if not os.path.isdir("results"):
    os.mkdir("results")
if not os.path.isdir("logs"):
    os.mkdir("logs")
if not os.path.isdir("data"):
    os.mkdir("data")

Finally, let's train the model:

# load the data
data = load_data(ticker, N_STEPS, lookup_step=LOOKUP_STEP, test_size=TEST_SIZE, feature_columns=FEATURE_COLUMNS)
# save the dataframe for later inspection
data["df"].to_csv(ticker_data_filename)
# construct the model
model = create_model(len(FEATURE_COLUMNS), loss=LOSS, units=UNITS, cell=CELL, n_layers=N_LAYERS,
                    dropout=DROPOUT, optimizer=OPTIMIZER)
# some tensorflow callbacks
checkpointer = ModelCheckpoint(os.path.join("results", model_name + ".h5"), save_weights_only=True,
                               save_best_only=True, verbose=1)
tensorboard = TensorBoard(log_dir=os.path.join("logs", model_name))

history = model.fit(data["X_train"], data["y_train"],
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    validation_data=(data["X_test"], data["y_test"]),
                    callbacks=[checkpointer, tensorboard],
                    verbose=1)

We used the ModelCheckpoint callback, which saves the model weights whenever the validation loss improves during training, so the weights we load later are the best seen so far. We also used TensorBoard to visualize the model performance during the training process.

After running the above block of code, it will train the model for 300 epochs, so it will take some time. Here are the first few output lines:

Epoch 1/300
3510/3510 [==============================] - 21s 6ms/sample - loss: 0.0117 - mean_absolute_error: 0.0515 - val_loss: 0.0065 - val_mean_absolute_error: 0.0487
Epoch 2/300
3264/3510 [==========================>...] - ETA: 0s - loss: 0.0049 - mean_absolute_error: 0.0352
Epoch 00002: val_loss did not improve from 0.00650
3510/3510 [==============================] - 1s 309us/sample - loss: 0.0051 - mean_absolute_error: 0.0357 - val_loss: 0.0082 - val_mean_absolute_error: 0.0494
Epoch 3/300
3456/3510 [============================>.] - ETA: 0s - loss: 0.0039 - mean_absolute_error: 0.0329
Epoch 00003: val_loss improved from 0.00650 to 0.00095, saving model to results\2020-01-08_NFLX-mse-LSTM-seq-50-step-1-layers-3-units-256
3510/3510 [==============================] - 14s 4ms/sample - loss: 0.0039 - mean_absolute_error: 0.0328 - val_loss: 9.5337e-04 - val_mean_absolute_error: 0.0150
Epoch 4/300
3264/3510 [==========================>...] - ETA: 0s - loss: 0.0034 - mean_absolute_error: 0.0304
Epoch 00004: val_loss did not improve from 0.00095
3510/3510 [==============================] - 1s 222us/sample - loss: 0.0037 - mean_absolute_error: 0.0316 - val_loss: 0.0034 - val_mean_absolute_error: 0.0300

After the training ends (or during the training), try to run tensorboard using this command:

tensorboard --logdir="logs"

This will start a local HTTP server at localhost:6006; after going to the browser, you'll see something similar to this:

Mean Squared Error over time during the training of Stock price predictor

The loss is the mean squared error, as specified in the create_model() function. The orange curve is the training loss, whereas the blue curve is what we care about the most: the validation loss. As you can see, it is significantly decreasing over time, so this is working!

Testing the Model

Before we test our model, we need to reload the data without shuffling, as we are going to plot the stock price curve in the correct order:

data = load_data(ticker, N_STEPS, lookup_step=LOOKUP_STEP, test_size=TEST_SIZE,
                feature_columns=FEATURE_COLUMNS, shuffle=False)

# construct the model
model = create_model(len(FEATURE_COLUMNS), loss=LOSS, units=UNITS, cell=CELL, n_layers=N_LAYERS,
                    dropout=DROPOUT, optimizer=OPTIMIZER)

model_path = os.path.join("results", model_name) + ".h5"
model.load_weights(model_path)

If you're following along in a notebook, you shouldn't reconstruct the model and load the weights (the trained model is already in memory), so comment those lines out. However, if you're testing in a separate Python file, then you do need them.

Now let's test our model:

# evaluate the model
mse, mae = model.evaluate(data["X_test"], data["y_test"])
# the metric is in the scaled 0-1 space, so invert the scaling to get a dollar value
mean_absolute_error = data["column_scaler"]["adjclose"].inverse_transform([[mae]])[0][0]
print("Mean Absolute Error:", mean_absolute_error)

Remember that the model's output is a value between 0 and 1, so we need to scale it back to a real price value. Here is the output:

Mean Absolute Error: 4.4244003

Not bad: on average, the predicted price is only $4.42 away from the real price.
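
Alternatively, here is a sketch that computes the same kind of dollar-scale error directly from unscaled predictions; this is slightly more precise, since the scaler's minimum offset cancels out in the differences:

# invert the scaling on predictions and true values, then average the absolute differences
y_pred = model.predict(data["X_test"])
y_true = np.expand_dims(data["y_test"], axis=1)
inv_pred = data["column_scaler"]["adjclose"].inverse_transform(y_pred)
inv_true = data["column_scaler"]["adjclose"].inverse_transform(y_true)
print("MAE in $:", np.mean(np.abs(inv_pred - inv_true)))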

Alright, let's try to predict the future price of Apple stock:

def predict(model, data):
    # retrieve the last sequence from data
    last_sequence = data["last_sequence"][:N_STEPS]
    # retrieve the column scalers
    column_scaler = data["column_scaler"]
    # add the batch dimension: (1, n_steps, n_features)
    last_sequence = np.expand_dims(last_sequence, axis=0)
    # get the prediction (scaled from 0 to 1)
    prediction = model.predict(last_sequence)
    # get the price (by inverting the scaling)
    predicted_price = column_scaler["adjclose"].inverse_transform(prediction)[0][0]
    return predicted_price

This function uses the last_sequence variable we saved in the load_data() function, which is basically the last sequence of prices; we use it to predict the next price. Let's call it:

# predict the future price
future_price = predict(model, data)
print(f"Future price after {LOOKUP_STEP} days is {future_price:.2f}$")

Output:

Future price after 1 days is 308.20$

Sounds interesting! The last price was $298.45, and the model is saying that the next day it will be $308.20. The model used just 50 days of features to get that value. Let's plot the prices and see:

def plot_graph(model, data):
    y_test = data["y_test"]
    X_test = data["X_test"]
    y_pred = model.predict(X_test)
    y_test = np.squeeze(data["column_scaler"]["adjclose"].inverse_transform(np.expand_dims(y_test, axis=0)))
    y_pred = np.squeeze(data["column_scaler"]["adjclose"].inverse_transform(y_pred))
    # last 200 days, feel free to edit that
    plt.plot(y_test[-200:], c='b')
    plt.plot(y_pred[-200:], c='r')
    plt.xlabel("Days")
    plt.ylabel("Price")
    plt.legend(["Actual Price", "Predicted Price"])
    plt.show()

This function plots the last 200 days of the test set (you can edit that as you wish) as well as the predicted prices. Let's call it and see how it looks:

plot_graph(model, data)

Result:

Prediction and actual prices

Great! As you can see, the blue curve is the actual test set and the red curve is the predicted prices. Notice that the stock price has been increasing dramatically recently, which is why the model predicted $308 for the next day.

Until now, we have only predicted the next day. I have also built other models that use different lookup_steps; here is an interesting result in TensorBoard:

Lookup step different models

Interestingly enough, the blue curve is the model we used in this tutorial, which uses the next time step's stock price as the label, whereas the green and orange curves use 10 and 30 lookup steps respectively. For instance, the orange model predicts the stock price 30 days ahead, which makes it a better fit for longer-term investments (which is usually the case).
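
If you'd like to reproduce such a comparison yourself, a sketch along these lines (reusing the functions and constants defined above) trains one model per lookup step, each logging to its own TensorBoard run:

# train one model per lookup step
for lookup_step in [1, 10, 30]:
    data = load_data(ticker, N_STEPS, lookup_step=lookup_step, test_size=TEST_SIZE,
                     feature_columns=FEATURE_COLUMNS)
    model = create_model(len(FEATURE_COLUMNS), loss=LOSS, units=UNITS, cell=CELL,
                         n_layers=N_LAYERS, dropout=DROPOUT, optimizer=OPTIMIZER)
    tensorboard = TensorBoard(log_dir=os.path.join("logs", f"lookup-{lookup_step}"))
    model.fit(data["X_train"], data["y_train"], batch_size=BATCH_SIZE, epochs=EPOCHS,
              validation_data=(data["X_test"], data["y_test"]),
              callbacks=[tensorboard], verbose=0)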

Now you may wonder: what if we just want to predict whether the price is going to rise or fall, not the actual price value as we did here? You can do that in one of two ways: either compare the predicted price with the current price and make the decision from that, or build a dedicated classification model, changing the last output layer's activation function to sigmoid, along with the loss and the metrics.
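
Here is a minimal sketch of that second option (the function name is mine, not from the library); note that you would also need binary 0/1 labels instead of the price targets that load_data() currently produces:

def create_classification_model(n_features, units=256, cell=LSTM, n_layers=2, dropout=0.3):
    model = Sequential()
    for i in range(n_layers):
        if i == 0:
            model.add(cell(units, return_sequences=True, input_shape=(None, n_features)))
        elif i == n_layers - 1:
            model.add(cell(units, return_sequences=False))
        else:
            model.add(cell(units, return_sequences=True))
        model.add(Dropout(dropout))
    # sigmoid output: the probability that the price goes up
    model.add(Dense(1, activation="sigmoid"))
    # binary cross-entropy loss and accuracy metric instead of the regression ones
    model.compile(loss="binary_crossentropy", metrics=["accuracy"], optimizer="rmsprop")
    return model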

The function below calculates the accuracy score using the first approach: it converts the predicted and actual prices to 0 or 1 (0 indicates that the price went down, and 1 indicates that it went up):

def get_accuracy(model, data):
    y_test = data["y_test"]
    X_test = data["X_test"]
    y_pred = model.predict(X_test)
    y_test = np.squeeze(data["column_scaler"]["adjclose"].inverse_transform(np.expand_dims(y_test, axis=0)))
    y_pred = np.squeeze(data["column_scaler"]["adjclose"].inverse_transform(y_pred))
    y_pred = list(map(lambda current, future: int(float(future) > float(current)), y_test[:-LOOKUP_STEP], y_pred[LOOKUP_STEP:]))
    y_test = list(map(lambda current, future: int(float(future) > float(current)), y_test[:-LOOKUP_STEP], y_test[LOOKUP_STEP:]))
    return accuracy_score(y_test, y_pred)

Now let's call the function:

print(f"{LOOKUP_STEP}:", "Accuracy Score:", get_accuracy(model, data))

Here is the result for the three models that use different lookup_steps:

1: Accuracy Score: 0.5227156712608474
10: Accuracy Score: 0.6069779374037968
30: Accuracy Score: 0.704935064935065

As you may notice, the model predicts more accurately on longer-term horizons: it reaches about 70.5% accuracy when we train it to predict the price of the next month (30 lookup steps), and about 86.6% accuracy when using 50 lookup steps and a sequence length (N_STEPS) of 70.

Conclusion

Alright, that's it for this tutorial. You can tweak the parameters and see how you can improve the model performance: try training for more epochs, say 500 or even more, increase or decrease the BATCH_SIZE and see if that changes things for the better, or play around with N_STEPS and LOOKUP_STEP and see which combination works best.

You can also change the model itself, such as increasing the number of layers or the number of LSTM units, or even try the GRU cell instead of LSTM.

Note that there are other features and indicators you can use. To improve the prediction, it is common to bring in additional information as features, such as the company's product innovation, interest rates, exchange rates, public policy, web and financial news, and even the number of employees!
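
For instance, since load_data() scales whatever columns you pass in feature_columns and also accepts a ready-made dataframe, you could experiment with a simple derived indicator such as a moving average (a sketch; the "ma7" column name is just my choice):

# add a 7-day moving average of the adjusted close as an extra feature
df = si.get_data("AAPL")
df["ma7"] = df["adjclose"].rolling(7).mean()
df.dropna(inplace=True)
data = load_data(df, N_STEPS, lookup_step=LOOKUP_STEP, test_size=TEST_SIZE,
                 feature_columns=["adjclose", "volume", "open", "high", "low", "ma7"])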

I also encourage you to change the model architecture: try using CNNs or Seq2Seq models, or even add bidirectional LSTMs to this existing model, and see if you can improve it!

Also, try different tickers: check the Yahoo Finance page and pick the ones you're actually interested in!

If you're not using a notebook or an interactive shell, I have split the code into different Python files, each for its own purpose; check it here.

Read also: How to Make an Image Classifier in Python using Keras.

Happy Training ♥
