  1. Twitter sentiment vs. Stock price

  2. Background
  • On April 24th 2013, the Twitter account belonging to the Associated Press was hacked. Fake posts about the White House being bombed and the President being injured were published. This led to a 1% loss on the Dow Jones.
  • On May 6th 2010, a poorly written algorithm triggered a selling spree that caused a 9.2% drop of the Dow Jones.
  • Using text mining as part of trading algorithms is common, and more incidents similar to these have happened (e.g. fake news about American Airlines going bankrupt once made the stock price fall quickly).

  3. Aim
  Inspired by this, I wanted to look into the following:
  • Is it possible to collect posts from Twitter (known as tweets) that mention a specified stock ticker (Apple Inc. uses AAPL), calculate a sentiment score for these tweets, and find a visual relationship between this score and the stock's current price?
  By a visual relationship we mean that we want to plot the score and the price side by side and be able to see a relationship between them by eye. More on this later…

  4. Method - High level perspective
  • The general idea is to get all tweets for a specific hour, calculate the average sentiment score of these tweets, and plot it next to the closing price of the stock for that hour.
  • But what is a sentiment score?
  1. Find (or create) a corpus of tweets that are classified as positive or negative, create features and use them in a naïve Bayes classifier (use the class distribution rather than the label as the score).
  2. Use a lexicon of sentiment-tagged words (e.g. bad could be negative and super could be positive). For each tweet, count the number of positive and negative words and create a score from these counts.

  5. Approach 1
  • The first approach was built upon what we have seen in the labs: creating features and using a naïve Bayes classifier.
  • I found a corpus of 1,600,000 tweets labelled as positive or negative. Based on these I wanted to create features and use them in a naïve Bayes classifier.
  • I created unigram, bigram and trigram features. Furthermore, I created a TF-IDF index over these tweets and used it as a feature. I also partially used the second approach (a lexicon of sentiment words, more on this later…). A sketch of this pipeline follows below.
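  The slides do not include the original code; below is a minimal sketch of how unigram, bigram and trigram features could feed a naïve Bayes classifier, here using NLTK as a stand-in, and assuming labelled_tweets is a list of (tokens, label) pairs from the corpus (the name is mine).

    # Minimal sketch, not the original code. Assumes labelled_tweets is a
    # list of (tokens, label) pairs, e.g. (["great", "earnings"], "positive").
    from nltk import NaiveBayesClassifier
    from nltk.util import ngrams

    def ngram_features(tokens):
        """Binary presence features for all unigrams, bigrams and trigrams."""
        feats = {}
        for n in (1, 2, 3):
            for gram in ngrams(tokens, n):
                feats[" ".join(gram)] = True
        return feats

    train_set = [(ngram_features(toks), label) for toks, label in labelled_tweets]
    classifier = NaiveBayesClassifier.train(train_set)

    # Use the class distribution rather than the hard label as the score:
    dist = classifier.prob_classify(ngram_features(["great", "earnings"]))
    score = dist.prob("positive")   # probability mass on the positive class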

  6. Approach 1
  • However, after a few days of trying to coerce my code into running in reasonable time, it turned out that I had failed.
  • Since each run was taking very long, I decided that I needed to save the tokenized and cleaned tweets, along with their features (and the TF-IDF index), to disk.
  • However, when trying to serialize the class structure I had created, the "pickle" module included in Python used more than 5 GB of RAM to check for cycles in the objects being saved, and it basically blew up every time (raising MemoryError).
  • So I had a choice: fix this (and save to an SQL database rather than to a flat file), or find another approach… I decided to use another approach.

  7. Approach 2
  • I found three lexicons, each consisting of words with a positive or negative label attached. One of them also included the POS of the word:
  • Example of first lexicon (8221 words):
  • word1=agony pos1=noun priorpolarity=negative
  • word1=agree pos1=verb priorpolarity=positive
  • Example of second lexicon (3642 words):
  • Consisted of two files, one with positive words: shield, shiny, …
  • And one with negative words: worst, wreck, …
  • Example of third lexicon (6787 words):
  • Consisted of two files, one with positive words: fine, flashy, …
  • And one with negative words: spooky, sporadic, …
  A sketch of how such lexicons could be merged follows below.
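  The parsing code is not shown in the slides; the sketch below merges the three formats into one lookup table. The file names and exact line formats are assumptions based on the examples above.

    # Hypothetical sketch: merge the three lexicons into one table, keeping
    # duplicates. File names are placeholders, not the actual ones used.
    from collections import defaultdict

    combined = defaultdict(list)   # token -> list of (lexicon_id, pos, polarity)

    # Lexicon 1: lines like "word1=agree pos1=verb priorpolarity=positive"
    with open("lexicon1.txt") as f:
        for line in f:
            fields = dict(p.split("=", 1) for p in line.split() if "=" in p)
            combined[fields["word1"]].append(
                (1, fields["pos1"], fields["priorpolarity"]))

    # Lexicons 2 and 3: one plain word-per-line file per polarity
    for lex_id, fname, polarity in [(2, "lex2_positive.txt", "positive"),
                                    (2, "lex2_negative.txt", "negative"),
                                    (3, "lex3_positive.txt", "positive"),
                                    (3, "lex3_negative.txt", "negative")]:
        with open(fname) as f:
            for word in f:
                combined[word.strip()].append((lex_id, None, polarity))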

  8. Approach 2
  • These lexicons were parsed and placed into one large lexicon (duplicates were allowed and not removed).
  • I then downloaded 7945 tweets that contained the word AAPL (the stock ticker for Apple Inc.).
  • For each of the tweets I did the following processing (sketched in code below):
  • Lowercased the text, removed all http://… and other URL structures, removed all usernames (i.e. @username), collapsed runs of whitespace into a single space, replaced #word with word, reduced repetitions of letters to at most two (e.g. yeeeeeehaaaaa became yeehaa), removed all words that did not start with a letter (e.g. 3am was removed), and stripped punctuation (!, ?, ., ,).
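  As an illustration, the cleaning rules above might look like this in Python; the regexes are my guesses at how each rule could be written, not the original code.

    import re

    def clean(tweet):
        t = tweet.lower()                              # lowercase
        t = re.sub(r"(https?://|www\.)\S+", "", t)     # remove URLs
        t = re.sub(r"@\w+", "", t)                     # remove @usernames
        t = re.sub(r"#(\w+)", r"\1", t)                # #word -> word
        t = re.sub(r"(\w)\1{2,}", r"\1\1", t)          # yeeeeeehaaaaa -> yeehaa
        t = re.sub(r"\s+", " ", t).strip()             # collapse whitespace
        tokens = [w.strip("!?.,") for w in t.split()]  # strip !, ?, ., ,
        return [w for w in tokens if w and w[0].isalpha()]  # drops e.g. "3am"

    print(clean("Soooo bullish on #AAPL!!! http://t.co/abc @sometrader"))
    # -> ['soo', 'bullish', 'on', 'aapl']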

  9. Approach 2
  • The next step was to create the actual sentiment score. For each tweet I wanted to look up the tokens in my lexicon to decide whether each token was positive or negative.
  • Since one of my lexicons also contained the POS of the word, each tweet was subjected to POS tagging.
  • Each token of a tweet was sent to the lexicon (along with its POS tag) and a sentiment was returned.
  • I did a simple count of the positive and negative words, as in the sketch below.
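  A sketch of this counting step, using NLTK's POS tagger as a stand-in (the slides do not name the tagger used); lexicon_sentiment() stands for the lookup described on the next slide.

    import nltk

    def count_sentiment(tokens):
        p = n = 0
        for token, tag in nltk.pos_tag(tokens):
            sentiment = lexicon_sentiment(token, tag)  # "positive"/"negative"/"neutral"
            if sentiment == "positive":
                p += 1
            elif sentiment == "negative":
                n += 1
        return p, n, len(tokens)   # positive count, negative count, total N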

  10. Approach 2
  • Since multiple lexicons were included in my larger lexicon, I needed a way of deciding which lexicon to trust for a given word (there was some overlap between the lexicons).
  • The following algorithm was created to resolve ties (one possible reading in code follows below):
  1. If only one lexicon contains the word, then that lexicon wins.
  2. If the token and POS match the first lexicon, then that lexicon wins.
  3. If all lexicons agree on the sentiment, then all win.
  4. If the lexicons disagree, then count votes (i.e. if one lexicon says positive and the other two say negative, then negative wins).
  5. If it is still a tie, then return neutral.
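  One possible reading of the five rules in code, reusing the merged-lexicon entries from the earlier sketch:

    # Hypothetical implementation of the tie-breaking rules.
    # entries: list of (lexicon_id, pos, polarity) hits for one token.
    from collections import Counter

    def resolve(entries, pos_tag):
        if len({lex_id for lex_id, _, _ in entries}) == 1:
            return entries[0][2]                  # rule 1: single lexicon
        for lex_id, pos, polarity in entries:
            if lex_id == 1 and pos == pos_tag:
                return polarity                   # rule 2: lexicon 1 + POS match
        polarities = [pol for _, _, pol in entries]
        if len(set(polarities)) == 1:
            return polarities[0]                  # rule 3: all agree
        top = Counter(polarities).most_common(2)
        if top[0][1] > top[1][1]:
            return top[0][0]                      # rule 4: majority vote
        return "neutral"                          # rule 5: still a tie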

  11. Approach 2
  • For each tweet there now exists a positive count (p), a negative count (n), and the total number of tokens (N).
  • The following two scores were then associated with each tweet:
  • Sentiment diff: p − n
  • Positive score: p / N
  • But I was not satisfied with this, because I felt that some words must be more negative than others, and some words must be more positive than others.
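  In code, given the (p, n, N) triple from the counting sketch:

    def raw_scores(p, n, N):
        return p - n, p / N   # sentiment diff, positive score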

  12. Approach 2
  • The idea was then to create a TF-IDF index using the tokens in the lexicon (approx. 8000 unique tokens) and 2000 tweets from the downloaded AAPL tweets.
  • This TF-IDF index was created (and since it was of a reasonable size, it could be serialized to disk).
  • The issue then arose that it was only really useful for the 2000 tweets used to create it: when new incoming tweets were to be processed, they did not belong to the index.

  13. Approach 2
  • So, since ignorance is bliss, I invented the average TF-IDF weight (sketched below):
  • I calculated the average TF-IDF for each token in the index, saved this value, and threw away all the other values in the index, creating a very compact index of average TF-IDF values.
  • So for any token (regardless of which tweet it came from) I could get an average weight for the token.
  • E.g. "good" could have weight 0.008 and "awesome" could have weight 0.1.
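  The implementation is not shown; below is a sketch using scikit-learn's TfidfVectorizer (my substitution). Whether the average runs over all 2000 tweets or only over tweets containing the token is not specified; this sketch averages over all of them.

    from sklearn.feature_extraction.text import TfidfVectorizer

    vectorizer = TfidfVectorizer(vocabulary=lexicon_tokens)  # ~8000 lexicon tokens
    tfidf = vectorizer.fit_transform(sample_tweets)          # the 2000 AAPL tweets

    # Average each token's column, then discard the full matrix, keeping
    # one compact weight per token.
    column_means = tfidf.mean(axis=0).A1
    avg_weight = {token: column_means[i]
                  for token, i in vectorizer.vocabulary_.items()}

    avg_weight.get("good")     # e.g. 0.008
    avg_weight.get("awesome")  # e.g. 0.1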

  14. Approach 2
  • Armed with the average TF-IDF index, I continued my sentiment scoring.
  • Instead of counting the positive and negative words, I looked them up in the average TF-IDF index and summed the weights. A weighted positive count (wp) and a weighted negative count (wn) gave the following scores:
  • Weighted sentiment diff: wp − wn
  • Weighted positive score: wp / N
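  Combining the earlier sketches, the weighted variant might look like:

    # Weighted variant of count_sentiment(): add the token's average TF-IDF
    # weight instead of 1 (tokens missing from the index contribute 0.0).
    def weighted_sentiment(tokens):
        wp = wn = 0.0
        for token, tag in nltk.pos_tag(tokens):
            sentiment = lexicon_sentiment(token, tag)
            weight = avg_weight.get(token, 0.0)
            if sentiment == "positive":
                wp += weight
            elif sentiment == "negative":
                wn += weight
        N = len(tokens)
        return wp - wn, wp / N   # weighted sentiment diff, weighted positive score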

  15. Plotting
  • The 7945 downloaded tweets were grouped by hour, so all tweets posted between 11:01 AM and 12:00 PM were considered to belong to 12:00 PM.
  • For each grouping, the individual sentiment score of each tweet was calculated (using all four sentiment scores discussed). The total sentiment score for the grouping was simply the average score.
  • Hourly closing prices for AAPL were downloaded from Google Finance (this means that at time 11:00 AM, the latest price AAPL was sold for is the closing price for that hour). A sketch of the grouping follows below.
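  A sketch of the grouping with pandas; the column names, tweet_times/tweet_scores and the hourly_close price series are placeholders, not names from the original code.

    import pandas as pd

    tweets = pd.DataFrame({"time": tweet_times, "score": tweet_scores})
    # A tweet posted 11:01 AM - 12:00 PM belongs to the 12:00 PM bucket,
    # i.e. timestamps are rounded up to the next full hour.
    tweets["hour"] = tweets["time"].dt.ceil("H")
    hourly_sentiment = tweets.groupby("hour")["score"].mean()

    # hourly_close: Series of hourly closing prices (e.g. from Google Finance).
    combined = pd.concat({"price": hourly_close,
                          "sentiment": hourly_sentiment}, axis=1).dropna()
    combined.plot(secondary_y="sentiment")   # price and sentiment side by side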

  16. Plots - Sentiment difference (raw counts)
  [Figure: hourly price and sentiment score between the 21st of May and the 27th of May]
  • At first glance visually useless; however, it is worth noting that the maximum of each oscillation increases…
  • Note: the flat horizontal lines are created while the stock market is closed.

  17. Plots - p / N (raw counts)
  [Figure: hourly price and positive score between the 21st of May and the 27th of May]
  • Difficult to find anything visually appealing about this…

  18. Plots - wp / N (weighted sum)
  [Figure: hourly price and weighted sum between the 21st of May and the 27th of May]
  • Just as bad as the positive score without the TF-IDF weighting…

  19. Plots - wp − wn (weighted difference)
  • "Chartists" are investors who mainly look at charts of price and volume rather than the fundamental data about a company.
  • They look for "trends" in the charts.
  • One of the classical ways of finding a trend is to find "higher lows".
  • The support lines drawn in the charts show that both the price and the sentiment are creating higher lows, indicating that the stock and the sentiment are entering (or are already in) a period of upward trend.
