Topic Modeling – Summarizing Financial News

In the last chapter, we used the bag-of-words (BOW) model to convert unstructured text data into a numerical format. This model abstracts from word order and represents documents as word vectors, where each entry represents the relevance of a token to the document. The resulting document-term matrix (DTM)—or transposed as the term-document matrix—is useful for comparing documents to each other or a query vector for similarity based on their token content and, therefore, finding the proverbial needle in a haystack. It provides informative features to classify documents, such as in our sentiment analysis examples.
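
As a minimal, self-contained illustration (toy documents, not code from the book's notebooks), the following snippet builds a DTM with scikit-learn and compares the documents by cosine similarity:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Three tiny "documents"; the vectorizer builds the document-term matrix.
docs = ['gold prices rally on weak dollar',
        'oil prices fall on supply glut',
        'stocks rally as dollar weakens']

dtm = CountVectorizer().fit_transform(docs)  # shape: (n_docs, n_tokens)

# Pairwise cosine similarity between the sparse document vectors.
print(cosine_similarity(dtm).round(2))
```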

However, this document model produces data that is both high-dimensional and very sparse, and it does little to summarize the content or get closer to understanding what the documents are about. In this chapter, we will use unsupervised machine learning to extract hidden themes from documents using topic modeling...

Learning latent topics – Goals and approaches

Topic modeling discovers hidden themes that capture semantic information beyond individual words in a body of documents. It aims to address a key challenge for a machine learning algorithm that learns from text data by transcending the lexical level of "what actually has been written" to the semantic level of "what was intended." The resulting topics can be used to annotate documents based on their association with various topics.

In practical terms, topic models automatically summarize large collections of documents to facilitate organization and management, as well as search and recommendations. At the same time, they enable an understanding of documents to the extent that humans can interpret the topic descriptions.

Topic models also mitigate the curse of dimensionality that often plagues the BOW model; representing documents with high-dimensional, sparse vectors can make similarity measures noisy...

Probabilistic latent semantic analysis

Probabilistic latent semantic analysis (pLSA) takes a statistical perspective on LSI/LSA and creates a generative model to address the lack of theoretical underpinnings of LSA (Hofmann 2001).

pLSA explicitly models the probability of word w appearing in document d, as described by the DTM, as a mixture of conditionally independent multinomial distributions that involve topics t.

There are both symmetric and asymmetric formulations of how word-document co-occurrences come about. The former assumes that both words and documents are generated by the latent topic class. In contrast, the asymmetric model assumes that topics are selected given the document, and words are then generated in a second step given the topic.
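
In symbols, the two factorizations of the joint probability of observing word w in document d can be written as follows (the standard rendering of Hofmann's model, with t ranging over topics):

$$P(w, d) = \sum_t P(t)\,P(w \mid t)\,P(d \mid t) \quad \text{(symmetric)}$$

$$P(w, d) = P(d)\sum_t P(t \mid d)\,P(w \mid t) \quad \text{(asymmetric)}$$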

The number of topics is a hyperparameter chosen prior to training and is not learned from the data.

The plate notation in Figure 15.4 describes the statistical dependencies in a probabilistic model. More specifically,...

Latent Dirichlet allocation

Latent Dirichlet allocation (LDA) extends pLSA by adding a generative process for topics (Blei, Ng, and Jordan 2003). It is the most popular topic model because it tends to produce meaningful topics that humans can relate to, can assign topics to new documents, and is extensible. Variants of LDA models can include metadata, like authors or image data, or learn hierarchical topics.

How LDA works

LDA is a hierarchical Bayesian model that assumes topics are probability distributions over words, and documents are distributions over topics. More specifically, the model assumes that topics follow a sparse Dirichlet distribution, which implies that documents reflect only a small set of topics, and topics use only a limited number of terms frequently.
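
In the smoothed formulation of Blei, Ng, and Jordan (2003), the generative process can be sketched as follows, where $\alpha$ and $\eta$ are Dirichlet concentration parameters, $k$ indexes topics, $d$ documents, and $n$ word positions:

$$\beta_k \sim \text{Dir}(\eta) \quad \text{(word distribution for topic } k\text{)}$$
$$\theta_d \sim \text{Dir}(\alpha) \quad \text{(topic proportions for document } d\text{)}$$
$$z_{d,n} \sim \text{Multinomial}(\theta_d) \quad \text{(topic assignment for word } n\text{)}$$
$$w_{d,n} \sim \text{Multinomial}(\beta_{z_{d,n}}) \quad \text{(observed word)}$$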

The Dirichlet distribution

The Dirichlet distribution produces probability vectors that can be used as a discrete probability distribution. That is, it randomly generates a given number of values that...
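
A quick way to build intuition is to sample from the distribution with NumPy (an illustration, not code from the book's notebooks); note how a concentration parameter below 1 pushes most of the mass onto a few entries:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Each draw is a length-10 probability vector (non-negative, sums to 1).
# A concentration parameter below 1 concentrates the mass on a few
# entries - the sparsity that LDA assumes for topics and documents.
sparse = rng.dirichlet(alpha=[0.1] * 10, size=3)
print(sparse.round(2), sparse.sum(axis=1))  # rows sum to 1

# A concentration parameter above 1 spreads the mass evenly instead.
dense = rng.dirichlet(alpha=[10.0] * 10, size=3)
print(dense.round(2))
```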

Modeling topics discussed in earnings calls

In Chapter 3, Alternative Data for Finance – Categories and Use Cases, we learned how to scrape earnings call data from the SeekingAlpha site. In this section, we will illustrate topic modeling using this source, based on a sample of some 700 earnings call transcripts from 2018 and 2019. This is a fairly small dataset; a practical application would require a larger one.

The directory earnings_calls contains several files with the code examples used in this section. Refer to the notebook lda_earnings_calls for details on loading, exploring, and preprocessing the data, as well as training and evaluating individual models, and the run_experiments.py file for the experiments described next.
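
To preview what these notebooks do, here is a minimal scikit-learn sketch on toy statements; the vectorizer settings, topic count, and variable names are illustrative, not the values tuned in lda_earnings_calls:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy statements standing in for the preprocessed transcript data.
statements = ['revenue grew and margins improved this quarter',
              'we expect revenue growth to continue next quarter',
              'the new product launch drove strong demand',
              'demand for the product exceeded our forecast',
              'margins improved on lower input costs',
              'we launched the product in new markets']

vectorizer = CountVectorizer(stop_words='english')
dtm = vectorizer.fit_transform(statements)

lda = LatentDirichletAllocation(n_components=2, random_state=42)
doc_topics = lda.fit_transform(dtm)  # shape: (n_docs, n_topics)

# components_ holds topic-word weights; list the top words per topic.
words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = topic.argsort()[::-1][:5]
    print(f'Topic {k}:', ', '.join(words[i] for i in top))
```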

Data preprocessing

The transcripts consist of individual statements by company representatives, an operator, and a Q&A session with analysts. We will treat each of these statements as separate documents, ignoring operator...

Topic modeling with financial news

The notebook lda_financial_news contains an example of LDA applied to a subset of a corpus of over 306,000 financial news articles from the first five months of 2018. The dataset has been posted on Kaggle; the articles were sourced from CNBC, Reuters, the Wall Street Journal, and other outlets. The notebook contains download instructions.

We select the 120,000 most relevant articles based on their section titles, which contain a total of 54 million tokens for an average of 429 words per article. To prepare the data for the LDA model, we rely on spaCy to remove numbers and punctuation and to lemmatize the results.
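
A minimal sketch of this cleaning step might look as follows, assuming the small en_core_web_sm model is installed (python -m spacy download en_core_web_sm); the notebook's exact pipeline may differ:

```python
import spacy

# Disable pipeline components we don't need to speed up processing.
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])

def clean_article(text):
    """Lemmatize and drop numbers, punctuation, whitespace, and stopwords."""
    return [t.lemma_.lower() for t in nlp(text)
            if not (t.is_punct or t.is_space or t.like_num or t.is_stop)]

# Roughly: ['stock', 'rally', 'fed', 'leave', 'rate', 'unchanged']
print(clean_article('Stocks rallied 3% after the Fed left rates unchanged.'))
```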

Figure 15.14 highlights the most frequent of the remaining tokens and the article length distribution, which has a median of 231 tokens; the 90th percentile is 642 words.

Figure 15.14: Corpus statistics for financial news data

In Figure 15.15, we show results for one model using a vocabulary of 3,570 tokens based on...

Summary

In this chapter, we explored the use of topic modeling to gain insights into the content of a large collection of documents. We covered latent semantic indexing, which uses dimensionality reduction of the DTM to project documents into a latent topic space. While effective in addressing the curse of dimensionality caused by high-dimensional word vectors, it does not capture much semantic information. Probabilistic models make explicit assumptions about the interplay of documents, topics, and words, allowing algorithms to reverse engineer the document generation process and evaluate the model's fit on new documents. We learned that LDA is capable of extracting plausible topics that allow us to gain a high-level understanding of large amounts of text in an automated way, while also identifying relevant documents in a targeted fashion.

In the next chapter, we will learn how to train neural networks that embed individual words in a high-dimensional vector space that captures important...
