Neural Network Research to Predict Gap Risk for Financial Assets

I set out to explore different neural network methodologies for predicting gap risk on financial assets.

This is a blog I aim to update weekly, sharing my journey through this research project.

Motivation For This Post

The inspiration for this project stems from a desire to continue learning after completing a six-month Professional Certification in Machine Learning and Artificial Intelligence at Imperial Business School in June 2024.

I continue experimenting with neural networks on this project. I am privileged to be guided by Ali Muhammad, who lectured me during my certification at Imperial.

In this project we use different neural network approaches to estimate gap risk on the price of financial assets, and each approach lives in its own subdirectory in this repo. The projects in this repo may slightly deviate from this objective as I explore and research associated predictions that help me build towards the end goal.

This project is a work in progress and this page serves as a log for this amazing journey.

Open Source Code Available

PyTorch-powered project stack

Work in progress:

CNN training by encoding price time series into Gramian Angular Field (GAF) images

Application of Recurrent Neural Networks with Long Short-Term Memory (LSTM) models
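
For reference, a minimal sketch of the LSTM side of this work, assuming a univariate sequence-to-one regressor in PyTorch; the layer sizes and class name below are illustrative, not the repo's actual configuration:

```python
import torch
import torch.nn as nn

# Minimal sketch of an LSTM next-step regressor for a univariate price window.
# Hidden size, depth, and window length are illustrative assumptions.
class NextStepLSTM(nn.Module):
    def __init__(self, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # predict from the last time step

model = NextStepLSTM()
window = torch.randn(8, 64, 1)                 # 8 windows of 64 prices each
print(model(window).shape)                     # torch.Size([8, 1])
```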

Get GitHub Repo

Before starting this blog post

I had a basic CNN Jupyter notebook that attempted to predict next-day prices from a time series, and had since been studying RNNs.
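
For context, a rough sketch of the kind of CNN regressor that notebook contained, assuming single-channel 32×32 GAF images as input and a single next-day price (or return) as the target; the architecture below is illustrative rather than the repo's actual model:

```python
import torch
import torch.nn as nn

# Illustrative CNN regressor over 32x32 single-channel GAF images.
# Channel counts, pooling, and dropout are placeholder choices.
class GafCnn(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16),
            nn.LeakyReLU(), nn.MaxPool2d(2),            # 32 -> 16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32),
            nn.LeakyReLU(), nn.MaxPool2d(2),            # 16 -> 8
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.25), nn.Linear(32 * 8 * 8, 1)
        )

    def forward(self, x):                               # x: (batch, 1, 32, 32)
        return self.head(self.features(x))

model = GafCnn()
print(model(torch.randn(4, 1, 32, 32)).shape)           # torch.Size([4, 1])
```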

Week 1

  • The CNN results are not optimal, with accuracy in the low 20% range
  • I had therefore started researching LSTMs and preparing a simple LSTM model to understand them better
  • After meeting with Ali, I have decided that an approach that is not “fail fast” can benefit my edification, and eventually this research, even if the results are poor
  • I have thus paused development of the RNN and am digging deeper into the CNN
  • I have refactored the CNN from a Jupyter notebook into Python scripts and helper-function modules. The model continues to yield unstable results
  • The correlation between encoded images drops to ~60%, whereas the correlation between the actual price time series used to build these images is >99%. I need to research this difference by encoding images with different parameters
  • I intend to run a grid search over the parameters of the above-mentioned image-generation algorithm (sketched below)
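
A minimal sketch of that sweep, assuming the pyts GramianAngularField encoder; the windows, grid values, and image size below are illustrative placeholders:

```python
import itertools
import numpy as np
from pyts.image import GramianAngularField

# Compare how the correlation between two overlapping price windows survives
# GAF encoding under different parameter settings. The synthetic prices,
# window lengths, and parameter grid are placeholders.
prices = np.cumsum(np.random.randn(300)) + 100.0
win_a, win_b = prices[:256], prices[20:276]            # highly correlated windows

print("raw series corr:", np.corrcoef(win_a, win_b)[0, 1])

for method, sample_range in itertools.product(
    ["summation", "difference"], [(-1, 1), (-1, 0.5), (0, 1)]
):
    gaf = GramianAngularField(image_size=32, method=method, sample_range=sample_range)
    img_a, img_b = gaf.fit_transform(np.vstack([win_a, win_b]))
    corr = np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1]
    print(f"method={method:10s} sample_range={sample_range}  image corr={corr:.3f}")
```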

Week 2

  • I have re-trained and validated several scenarios for the CNN using GAF images. Unfortunately I had to stop because the results were hard to track. The analysis is therefore incomplete.
  • The results from these simulations so far are more encouraging: with ~60% correlation between the training and validation GAF image datasets, the validation dataset yields [accuracy at 1 decimal place = 44%, R^2 = 46%, MSE = 3%].
  • I still need to dig into the correlation between these images for different datasets, but it is becoming clearer which hyperparameters help
  • I asked my professor for guidance on how to better track results, and he suggested MLflow. I will spend the next week setting it up.
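
A minimal sketch of the MLflow tracking being set up; the experiment name matches the one mentioned in Week 3, while the run name, parameters, and metric values are placeholders:

```python
import mlflow

# Illustrative tracking setup: one run per training scenario, with the
# hyperparameters logged up front and metrics logged per epoch.
mlflow.set_experiment("gaprisk-experiment-005")

with mlflow.start_run(run_name="cnn-gaf-32x32"):
    mlflow.log_params({"image_size": 32, "gaf_method": "summation",
                       "dropout": 0.25, "batch_size": 512})
    for epoch in range(3):
        train_loss = 1.0 / (epoch + 1)          # placeholder training loss
        mlflow.log_metric("train_loss", train_loss, step=epoch)
```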

Week 3

  • I run an MLflow server locally, and I have set up a public MLflow server running in a Docker container that serves my results from storage and the database. Unfortunately, for now it is backed by a cheap Azure SQL database, so it is not the fastest at displaying results
  • MLflow was particularly useful for visualising the results of encoding images with different parameters. I compared scenarios that encode the time series into images using MarkovTransitionField vs GramianAngularField (GAF); the latter performs best with the summation method, gaf_sample_range (-1, 0.5), a MinMax(range -1,x) scaler, and dropout 0.25 to 0.5. These results are stored in the MLflow gaprisk-experiment-005.
  • Several runs did not converge, training endlessly through the 10k-15k epochs. I have now added a PyTorch LR scheduler (ReduceLROnPlateau with mode "min") to dynamically ratchet the learning rate down, and training is abandoned if there is no loss improvement after 300 epochs in the case of 32×32 images (see the sketch after this list).
  • I hit a wall: at the best accuracy result, the CNN gets stuck at a local minimum ~0.15%. I am using multiple regularization methods: nn.BatchNorm1d, Leaky ReLU, momentum, and He (Kaiming) weight initialization. I am now searching for alternative approaches that may help reach the global minimum, the first being an increase in the size of the GAF images so that each image represents a larger time series cohort.
  • I have also found that performance during model training is suboptimal: my RTX 3090 Ti GPU is humming at 30-50% capacity. I suspect the root of the issue is moving tensor data to the CPU to perform interim calculations during training.
  • MLflow has provided clarity on the inconsistency of results: there may be a bug in my calculation of R^2. I am going to refactor this calculation to make it more GPU-optimal, along with fixing the bug.
  • I will also re-run scenarios with optimizer and loss-function combinations other than Adam and MSELoss.
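
A minimal sketch of the plateau scheduling and 300-epoch abandonment mentioned above; the toy model, data, and scheduler patience are placeholders, while the 300-epoch cut-off comes from the 32×32 runs described in this list:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Reduce the LR when the training loss plateaus, and abandon the run entirely
# if the loss has not improved for 300 epochs. Toy model and data only.
model = nn.Linear(10, 1)
x, y = torch.randn(512, 10), torch.randn(512, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5, patience=50)

best_loss, epochs_since_best = float("inf"), 0
for epoch in range(15_000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())                 # ratchet the LR down on plateau
    if loss.item() < best_loss - 1e-6:
        best_loss, epochs_since_best = loss.item(), 0
    else:
        epochs_since_best += 1
    if epochs_since_best >= 300:                # abandon non-converging runs
        print(f"abandoning at epoch {epoch}: no improvement for 300 epochs")
        break
```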

Access The MLFlow Server

MLFlow Server

Username and Password: visitor
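
A sketch of how a visitor could point an MLflow client at this server, assuming HTTP basic authentication; the tracking URL below is a placeholder for the server linked above:

```python
import os
import mlflow

# Credentials are the visitor ones published above; the host is a placeholder.
os.environ["MLFLOW_TRACKING_USERNAME"] = "visitor"
os.environ["MLFLOW_TRACKING_PASSWORD"] = "visitor"
mlflow.set_tracking_uri("https://<mlflow-server-host>")

for exp in mlflow.search_experiments():        # e.g. gaprisk-experiment-005
    print(exp.name)
```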

Week 4

  • In order to refactor the tensor code, I realised I am lacking some important PyTorch concepts around tensor operations. I am taking a few days to study this interesting course and this one.

Week 5

  • I have refactored the code to keep all calculations on the GPU unless strictly necessary. However, MLflow logging requires metrics on the CPU and this slows down training.
  • I have confirmed the R^2 and accuracy calculations are correct, further verified with torcheval.metrics.functional.regression.r2_score. Since R^2 is calculated at 64-bit precision, it suggests poorer predictions than the 1-decimal-place accuracy results of ~50% imply, and is more in line with the 2-decimal-place accuracy results of ~10%. Other error measures like RMSE are not as extreme (see the sketch after this list).
  • In relation to the GPU's low utilization rate, I have confirmed the bottleneck is not the DataLoader: for the short dataset I use, the GPU's highest utilization for the hyperparameters that yield the best results comes with DataLoader num_workers=0, even when running larger batches (best-performing batch_size=512). Disabling MLflow logging, which moves data to the CPU, increases GPU utilization to 100% as expected; GPU utilization is 20-40% when MLflow logs.
  • I have started running the accuracy analysis for larger images
  • I am now researching an embedding model suitable for univariate analysis.
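
A minimal sketch of the GPU-resident metric calculation described above, using torcheval's r2_score and a rounded-comparison accuracy; the tensors are random placeholders and only scalar values cross to the CPU for logging:

```python
import torch
from torcheval.metrics.functional import r2_score

# Compute R^2 and the 1-decimal-place "accuracy" directly on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
preds = torch.randn(512, device=device)                     # placeholder predictions
targets = preds + 0.1 * torch.randn(512, device=device)     # placeholder targets

r2 = r2_score(preds, targets)                                # stays on the GPU
acc_1dp = (torch.round(preds, decimals=1)
           == torch.round(targets, decimals=1)).float().mean()

# Only .item() (a cheap scalar copy) crosses to the CPU, e.g. for mlflow.log_metrics.
print({"r2": r2.item(), "accuracy_1dp": acc_1dp.item()})
```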

Week 6

  • Completed testing the network with different image sizes: using the best hyperparameters and model parameters, I have found that due to the short time series in this dataset I can only run image sizes of 32, 64, and 128. 32×32 was the most performant, at 48% accuracy and 42% R^2 for the 1-decimal-place prediction, whilst 2-decimal-place accuracy continues to underperform at 4%. Interestingly, to reach a training loss <0.1 for the larger images, I had to remove the ReduceLROnPlateau abandoning method during training.
  • The meeting with my professor was productive as always. I have now adjusted the backlog priorities on the back of this conversation.
  • Refactor to switch the MLflow context on/off (sketched below)
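
A sketch of one way the switchable MLflow context could look, using a no-op context manager when logging is disabled; the helper name is illustrative:

```python
import contextlib
import mlflow

# Return a real MLflow run when tracking is enabled, otherwise a no-op context
# so the training loop stays identical (and GPU-resident) in both modes.
def tracking_run(enabled: bool, run_name: str = "cnn-gaf"):
    if enabled:
        mlflow.set_experiment("gaprisk-experiment-005")
        return mlflow.start_run(run_name=run_name)
    return contextlib.nullcontext()

with tracking_run(enabled=False):
    for epoch in range(3):
        loss = 1.0 / (epoch + 1)               # placeholder training loss
        if mlflow.active_run() is not None:    # log only when tracking is on
            mlflow.log_metric("loss", loss, step=epoch)
```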

Backlog

  • To address the problem of getting stuck at a local minimum, re-train the existing arch/dataset with a manually coded approach to the learning rate throughout the epochs, increasing the LR to bounce off local minima and reducing it to reach the global minimum (see the sketch after this list).
  • Pre-processing/Transformations:
    • The training set is too small. Add multiple stocks' time series that exhibit gap risk via concatenation, instead of the single stock I currently use. I'll adjust the windows to start at the beginning of each stock's time series. To start, train the NN on 2 stocks that are highly correlated, then medium-correlated, then zero-correlated, to confirm the approach is robust. The next step is to increase the number of homogeneous stocks. Try 1k data points per time series, then 2k, then 3k.
    • Encode images pre-log delta and gamma transformation
    • Since the dataset's max-min range is likely large and the data volatile, a differencing transformation (i.e. absolute % change) is unlikely to help the model's learning
    • Embeddings – leaning towards the stumpy library for matrix profiles, which has interesting applications.
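
A minimal sketch of the manually coded learning-rate idea from the first backlog item: periodically raise the LR to bounce out of a local minimum, then decay it again; the cycle length, bounds, and toy model are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Restart the LR high at the start of each cycle and decay it linearly,
# setting it directly on the optimizer's param groups each epoch.
def bounce_lr(epoch: int, base_lr: float = 1e-4, peak_lr: float = 1e-2,
              cycle: int = 500) -> float:
    t = (epoch % cycle) / cycle                # position within the cycle, 0..1
    return peak_lr - (peak_lr - base_lr) * t

model = nn.Linear(10, 1)                       # toy stand-in for the CNN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = torch.randn(256, 10), torch.randn(256, 1)

for epoch in range(2_000):
    for group in optimizer.param_groups:       # manual schedule, no Scheduler class
        group["lr"] = bounce_lr(epoch)
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```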