


Introduction & Profiles


Hi there everyone! We are Team 21, first-place prize winners of the Imperial College London Data Science Society's (ICDSS) 2021 AI Hackathon 'Kaiko Cryptocurrency Challenge'. We are Howard, a penultimate-year Mechanical Engineering student, and Stephanie, a penultimate-year Molecular Bioengineering student, both from Imperial College London.


Check out the full report and code in this repo: https://github.com/howardwong97/AI-Hack-2021-Team-21-Submission.


 


Feel free to contact us if you have any questions!

Howard

LinkedIn: https://www.linkedin.com/in/howardwongjy/

GitHub: https://github.com/howardwong97


 


Stephanie

LinkedIn: https://www.linkedin.com/in/stephaniesze/

GitHub: https://github.com/steph128



 


Kaiko Cryptocurrency Challenge and Our Motivation


The Kaiko Cryptocurrency Challenge provided cryptocurrency market data with which to create a predictive model. We tackled the challenge by investigating how effectively traditional time series models predict volatility in the cryptocurrency market, and the effect of introducing social media sentiment.


If we look at the Bitcoin volatility index, the latest 30-day estimate for the BTC/USD pair is 4.30%. Several factors contribute to the high volatility of cryptocurrency prices: low liquidity, minimal regulation, and the fact that it is a very young market. It is incredibly difficult to apply fundamental analysis, so the values of cryptocurrencies are mostly driven by speculation. Social media, therefore, makes a huge impact. Take Elon Musk's tweet about Dogecoin, for example: we observed a dramatic price drop and increased volatility. Although we can't say with certainty that this was a direct result of the tweet, we cannot underestimate the effect of social media on the cryptocurrency market.


 


Exploratory Data Analysis


Instead of working directly with prices, we compute the returns, which normalises the data to provide a comparable metric. Furthermore, we take the log of the returns, which has the desirable property of additivity. Denoted by $r_t$, the log returns can be written as

$$r_t = \ln\left(\frac{P_t}{P_{t-1}}\right) = \ln P_t - \ln P_{t-1}$$

where $P_t$ is the price at time $t$.
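In pandas this is a one-liner. A minimal sketch, assuming `prices` is a timestamp-indexed pandas Series of BTC/USD prices (an illustrative name, not from the original report):

```python
import numpy as np

# `prices` is an assumed pandas Series of BTC/USD prices indexed by timestamp
log_returns = np.log(prices).diff().dropna()  # r_t = ln(P_t) - ln(P_{t-1})
```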


 


The histogram of log returns is plotted below. It is often assumed that log returns, especially in the equities market, are normally distributed. The unimodal distribution seems to agree with this assumption. However, the negative skew and excess kurtosis suggest that this is not the case!


 


[Figure: Histogram of log returns]


 


Mean: -0.000002
Variance: 4.68e-07
Skew: -4.26
Excess kurtosis: 179.5
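These summary statistics are straightforward to compute; a minimal sketch with scipy, reusing the `log_returns` series from the earlier sketch:

```python
from scipy import stats

print("Mean:", log_returns.mean())
print("Variance:", log_returns.var())
print("Skew:", stats.skew(log_returns))
print("Excess kurtosis:", stats.kurtosis(log_returns))  # Fisher definition: 0 for a normal
```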

 


We are interested in modelling the serial correlation observed in the log returns. The autocorrelation function (ACF) plot suggests that there is significant serial correlation. In addition, plotting the partial autocorrelation function (PACF) of the squared log returns reveals autoregressive conditional heteroskedastic effects (more on this later). In other words, the volatility is not serially independent.
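A minimal sketch of these diagnostic plots using statsmodels (again assuming the `log_returns` series from earlier):

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
plot_acf(log_returns, lags=40, ax=ax1, title="ACF of log returns")
plot_pacf(log_returns ** 2, lags=40, ax=ax2, title="PACF of squared log returns")
plt.show()
```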


 


[Figure: ACF of log returns and PACF of squared log returns]


 


Lastly, we talk about the concept of stationarity. Roughly speaking, a time series is said to be weakly stationary if both the mean of $r_t$ and the covariance of $r_t$ and $r_{t-1}$ are time invariant. This is the foundation of time series analysis; the mean is only informative if the expected value remains constant across time periods. Therefore, we performed the Augmented Dickey-Fuller unit-root test and confirmed that the log returns are indeed stationary.
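A minimal sketch of the test with statsmodels; a p-value below 0.05 rejects the unit-root null, supporting stationarity:

```python
from statsmodels.tsa.stattools import adfuller

adf_stat, p_value, *_ = adfuller(log_returns)
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.4f}")
```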


 


Time Series Analysis


A mixed autoregressive moving average process, or ARMA($p$, $q$), is written as

$$r_t = \phi_0 + \sum_{i=1}^{p} \phi_i r_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j}$$


One of the assumptions of ARMA is that the error process, $\varepsilon_t$, is homoscedastic, i.e. its variance is constant over time. However, we have seen from the PACF plot of the squared log returns that this might not be the case. Volatility has some interesting characteristics. Firstly, asset returns tend to exhibit volatility clustering; volatility tends to remain high (or low) over long periods. Secondly, volatility evolves in a continuous manner; large jumps in volatility are rare. This is where volatility models come in. The idea of autoregressive conditional heteroscedasticity (ARCH) is that the variance of the current error term is dependent on previous shocks. An ARCH($q$) model assumes

$$\varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2$$

where $z_t$ is white noise.


 


Generalised ARCH (GARCH) builds upon ARCH by allowing lagged conditional variances to enter the model as well:

$$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2$$

The constants $\omega$, $\alpha$ and $\beta$ are parameters to be estimated. $\alpha$ can be interpreted as a measure of the reaction of the volatility to market shocks, while $\beta$ measures its persistence. Therefore, ARMA specifies the structure of the conditional mean of log returns, while GARCH specifies the structure of the conditional variance. Put together, an ARMA(1,1)-GARCH(1,1) can be summarised as

$$r_t = \phi_0 + \phi_1 r_{t-1} + \varepsilon_t + \theta_1 \varepsilon_{t-1}, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2$$
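A minimal sketch of fitting such a model with the `arch` package. Note that `arch` does not support MA terms in the mean equation, so an AR(1) mean stands in for the ARMA(1,1) above; this is an approximation, not necessarily the exact implementation used here:

```python
from arch import arch_model

# Rescale to percentage returns: the optimiser is better behaved when the
# series is not vanishingly small
model = arch_model(100 * log_returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.summary())

# One-step-ahead conditional variance forecast
print(result.forecast(horizon=1).variance.iloc[-1])
```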


Forecasting Volatility


Another interesting property of volatility is that it is not directly observable. For example, if we had daily log returns data for BTC, we could not establish the daily volatility. However, when data with finer granularity (e.g. one-minute data) is available, one can estimate this by taking the sample standard deviation over a single trading day. Therefore, we used the following forecasting scheme:



  1. Reduce the resolution of the log returns to five-minute intervals. Since log returns are additive, we can simply sum the log returns from $t_n$ to $t_{n+5}$.

  2. Compute the realized volatility for each five-minute period (a sketch of steps 1 and 2 follows this list).

  3. Use a rolling window of 120 samples to fit ARMA-GARCH using maximum likelihood.

  4. Use fitted parameter estimates to compute the forecasted volatility for the next five-minute interval.
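A minimal sketch of steps 1 and 2, assuming `r1` is a pandas Series of one-minute log returns indexed by timestamp (an illustrative name):

```python
# Step 1: log returns are additive, so five-minute returns are simple sums
r5 = r1.resample("5min").sum()

# Step 2: realized volatility per bucket, estimated as the sample standard
# deviation of the one-minute log returns within that five-minute window
rv = r1.resample("5min").std()
```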


Fitting the model on a rolling window and then forecasting the following period's five-minute volatility ensures that we avoid look-ahead bias.
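One plausible reading of steps 3 and 4 in code, reusing the `r5` series from the sketch above (`WINDOW`, `vol_forecasts` and the AR(1) mean are illustrative assumptions, as before):

```python
import numpy as np
import pandas as pd
from arch import arch_model

WINDOW = 120
vol_forecasts = {}

for end in range(WINDOW, len(r5)):
    window = 100 * r5.iloc[end - WINDOW:end]  # past data only: no look-ahead
    res = arch_model(window, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(
        disp="off", show_warning=False
    )
    # One-step-ahead conditional variance -> volatility for the next interval
    var_next = res.forecast(horizon=1).variance.iloc[-1, 0]
    vol_forecasts[r5.index[end]] = np.sqrt(var_next) / 100  # undo the scaling

vol_forecasts = pd.Series(vol_forecasts)
```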


 


[Figure: Realized volatility vs ARMA-GARCH forecasts]


The results are plotted above. Clearly, the ARMA-GARCH model did not perform very well! Indeed, we fitted ARMA(1,1) and GARCH(1,1) for simplicity; other lag orders could be necessary. One could also argue that the models were fitted over a relatively short timeframe.


 


Sentiment Analysis


There are many flavours of GARCH (e.g. I-GARCH, E-GARCH). However, we are interested in exploring the possibility of introducing sentiment regressors to the GARCH model specification. It is straightforward to introduce additional terms, i.e.


 


$$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2 + \gamma x_{t-1}$$

where $x_{t-1}$ is an additional explanatory variable, and $\gamma$ is a new parameter to be estimated.


 


So how do we measure sentiment? For this, we turn to Reddit, which provides an API for searching for posts and comments. We performed a search for “Bitcoin” and “BTC” across several subreddits (yes, including WallStreetBets).
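A minimal sketch of this collection step using PRAW, the Python Reddit API wrapper; the credentials, subreddits, and query are illustrative placeholders, since the report does not specify them:

```python
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="btc-sentiment-scraper",
)

subreddits = reddit.subreddit("Bitcoin+CryptoCurrency+wallstreetbets")
posts = [
    {"time": s.created_utc, "text": s.title + " " + s.selftext}
    for s in subreddits.search("Bitcoin OR BTC", limit=1000)
]
```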


 


What remains is to engineer features for our model. There are two key features that we saw to be the most informative:



  • Frequency – how many times has Bitcoin been mentioned on Reddit within a given timeframe?

  • Sentiment – what is the overall sentiment (positive or negative)?


Indeed, upvotes would have been a good feature to include too, as they indicate the reach of a post or comment. However, we did not include them in this project.


 


Natural Language Processing (NLP) techniques have been utilised in the past to classify sentiment as positive or negative. However, comments about the financial markets are unique in terms of terminology. Therefore, a domain-specific corpus must be built to train a sentiment model. Conveniently, StockTwits is a site where users can label their own comments as either “bullish” or “bearish”, so this was the perfect source of training data. In our past work, we scraped thousands of posts and trained a RoBERTa model.
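A minimal sketch of running such a classifier with Hugging Face transformers; the checkpoint path stands in for the team's StockTwits-trained model, which is not public:

```python
from transformers import pipeline

# "path/to/finetuned-roberta" is a placeholder for the fine-tuned checkpoint
classifier = pipeline("text-classification", model="path/to/finetuned-roberta")

comments = ["BTC to the moon!", "Selling everything, this looks like a bubble."]
for comment, pred in zip(comments, classifier(comments)):
    print(comment, "->", pred["label"], round(pred["score"], 3))
```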


 


What is RoBERTa? Many are familiar with BERT, the self-supervised method released by Google in 2018. Researchers at Facebook AI and the University of Washington built upon it by removing BERT's next-sentence pretraining objective and training with much larger mini-batches and learning rates. We chose this model for its promise of better downstream task performance – especially important in a 24-hour hackathon!


 


Feature Engineering


Having scraped all mentions of Bitcoin on Reddit over the period, we made sentiment predictions using our financial RoBERTa model, which labels each comment as “Bullish” (positive) or “Bearish” (negative). We created the following features:



  • N, the number of comments made about BTC in the past hour.

  • S, computed by assigning a score of +1 to each “Bullish” comment and −1 to each “Bearish” comment, and summing these scores over all comments in the past hour.


The new GARCH specification is now


$$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2 + \gamma_1 N_{t-1} + \gamma_2 S_{t-1}$$


It is important to ensure that N and S are synchronous with the log returns (i.e. a post or comment must have been published at or before the time period of interest).
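A minimal sketch of building these lagged, synchronous features, assuming `posts` is a DataFrame with a UTC DatetimeIndex and a `label` column of “Bullish”/“Bearish” predictions (illustrative names):

```python
import pandas as pd

posts["s"] = posts["label"].map({"Bullish": 1, "Bearish": -1})

hourly = pd.DataFrame({
    "N": posts["s"].resample("1h").count(),  # number of mentions per hour
    "S": posts["s"].resample("1h").sum(),    # net sentiment per hour
})

# Lag by one period so each interval only sees posts published before it
features = hourly.shift(1)
```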


 


[Figure: Reddit sentiment features]


Results


So, how did our new sentiment-based model perform? Terribly! In fact, the mean squared error (MSE) of the new model was about ten times worse than that of the original model. There are clearly many pitfalls in the work we have presented here. Our sentiment model was very simplistic, as it only provided a ‘bullish’ or ‘bearish’ signal. The Reddit dataset that we created was also relatively small – there are other sources of news that we could have used. One could also argue that our sentiment model was incapable of identifying bots deployed to manipulate sentiment models such as this one.


 


Something also must be said about the efficacy of traditional time series models. GARCH models have historically been rather effective in forecasting daily volatility. However, our intuition tells us that social media sentiment plays a big role. Our future work will focus on more appropriate ways of integrating it into our model.

Resources
Microsoft Learn: Blockchain
Beginner's Guide to Blockchain on Azure
