
You're reading from TensorFlow 2 Reinforcement Learning Cookbook

Product type: Book
Published in: Jan 2021
Reading level: Expert
Publisher: Packt
ISBN-13: 9781838982546
Edition: 1st
Author: Praveen Palanisamy

Praveen Palanisamy works on developing autonomous intelligent systems. He is currently an AI researcher at General Motors R&D. He develops planning and decision-making algorithms and systems that use deep reinforcement learning for autonomous driving. Previously, he was at the Robotics Institute, Carnegie Mellon University, where he worked on autonomous navigation, including perception and AI for mobile robots. He has experience developing complete, autonomous, robotic systems from scratch.

Chapter 4: Reinforcement Learning in the Real World – Building Cryptocurrency Trading Agents

Deep reinforcement learning (deep RL) agents have a lot of potential for solving challenging problems in the real world, and plenty of opportunities exist. However, due to the various challenges associated with real-world deployments of RL agents, there are only a few success stories of deep RL agents being used in the real world beyond games. This chapter contains recipes that will help you successfully develop RL agents for an interesting and rewarding real-world problem: cryptocurrency trading. The recipes in this chapter show you how to implement custom OpenAI Gym-compatible learning environments for cryptocurrency trading with both discrete and continuous-value action spaces, and how to build and train RL agents for trading cryptocurrency in those environments.

Specifically, the following recipes will be covered in this chapter:

- Building a Bitcoin trading RL platform using real market data
- Building an Ethereum trading RL platform using price charts
- Building an advanced cryptocurrency trading platform for RL agents
- Training a cryptocurrency trading bot using RL

Technical requirements

The code in this book has been extensively tested on Ubuntu 18.04 and Ubuntu 20.04 and should work with later versions of Ubuntu if Python 3.6+ is available. With Python 3.6+ installed, along with the necessary Python packages listed at the start of each recipe, the code should run fine on Windows and macOS too. You should create and use a Python virtual environment named tf2rl-cookbook to install the packages and run the code in this book. Installing Miniconda or Anaconda for Python virtual environment management is recommended.

The complete code for each recipe in each chapter is available here: https://github.com/PacktPublishing/Tensorflow-2-Reinforcement-Learning-Cookbook.

Building a Bitcoin trading RL platform using real market data

This recipe will help you build a cryptocurrency trading RL environment for your agents. This environment simulates a Bitcoin trading exchange based on real-world data from the Gemini cryptocurrency exchange. In this environment, your RL agent can place buy/sell/hold trades and get rewards based on the profit/loss it makes, starting with an initial cash balance in the agent's trading account.
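To make the mechanics concrete before diving into the recipe, here is a minimal sketch of a Gym-style discrete-action trading environment. The class name, the one-coin-per-trade rule, and the simplified accounting are illustrative assumptions, not the book's actual implementation (which follows the full OpenAI Gym API with `spaces` and real Gemini exchange data):

```python
class SimpleBitcoinTradingEnv:
    """Toy exchange: the agent holds cash and BTC, and at each step
    chooses 0 = hold, 1 = buy one coin, 2 = sell one coin."""
    HOLD, BUY, SELL = 0, 1, 2

    def __init__(self, prices, initial_balance=10_000.0):
        self.prices = list(prices)            # one price per time step
        self.initial_balance = initial_balance
        self.reset()

    def reset(self):
        self.step_idx = 0
        self.cash = self.initial_balance
        self.coins = 0.0
        return self._observation()

    def _observation(self):
        return (self.prices[self.step_idx], self.cash, self.coins)

    def _account_value(self):
        return self.cash + self.coins * self.prices[self.step_idx]

    def step(self, action):
        value_before = self._account_value()
        price = self.prices[self.step_idx]
        if action == self.BUY and self.cash >= price:
            self.cash -= price
            self.coins += 1.0
        elif action == self.SELL and self.coins >= 1.0:
            self.cash += price
            self.coins -= 1.0
        self.step_idx += 1
        done = self.step_idx >= len(self.prices) - 1
        # Reward = profit/loss on the account over this step
        reward = self._account_value() - value_before
        return self._observation(), reward, done, {}
```

The reward here is the change in total account value (cash plus mark-to-market holdings) over one step, which mirrors the profit/loss-based reward the recipe describes.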

Getting ready

To complete this recipe, make sure you have the latest version of the code. You will need to activate the tf2rl-cookbook Python/conda virtual environment. Make sure that you update the environment so that it matches the latest conda environment specification file (tfrl-cookbook.yml), which can be found in this cookbook's code repository. If the following import statements run without issues, you are ready to get started:

import os
import random
from typing import Dict
import gym
import numpy as np
import pandas as pd
from gym import spaces...

Building an Ethereum trading RL platform using price charts

This recipe will teach you how to implement an Ethereum cryptocurrency trading environment for RL Agents with visual observations. The Agent will observe a price chart containing Open, High, Low, Close, and Volume (OHLCV) information over a specified time period and take an action (Hold, Buy, or Sell). The objective of the Agent is to maximize its reward, which is the profit you would make if you deployed the Agent to trade in your account!
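To illustrate what a "visual observation" of a price chart means in the simplest possible terms, here is a toy rasterizer that maps a window of closing prices onto a small 2-D grid (rows = price buckets, columns = time steps). The real recipe renders proper OHLCV candlestick charts as images; this function and its name are purely illustrative:

```python
def prices_to_chart(prices, height=8):
    """Map each price in the window to a row bucket; 1 marks the price
    level at that time step, 0 elsewhere (row 0 = lowest price)."""
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0                      # avoid division by zero
    grid = [[0] * len(prices) for _ in range(height)]
    for col, price in enumerate(prices):
        row = int((price - lo) / span * (height - 1))
        grid[row][col] = 1
    return grid
```

An agent with a convolutional network front end can consume such an image-like array directly, which is the core idea behind observing charts instead of raw numbers.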

Getting ready

To complete this recipe, make sure you have the latest version of the code. You will need to activate the tf2rl-cookbook Python/conda virtual environment. Make sure that you update the environment so that it matches the latest conda environment specification file (tfrl-cookbook.yml), which can be found in this cookbook's code repository. If the following import statements run without any issues, you are ready to get started:

import os
import random
from typing import Dict
import cv2
import...

Building an advanced cryptocurrency trading platform for RL agents

Instead of allowing the Agent to only take discrete actions, such as buying/selling/holding a pre-set amount of Bitcoin or Ethereum tokens, what if we allowed the Agent to decide how many crypto coins/tokens it would like to buy or sell? That is exactly what this recipe will allow you to create in the form of a CryptoTradingVisualContinuousEnv RL environment.
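One plausible way to interpret such a continuous action is as a single number in [-1, 1]: a positive value spends that fraction of available cash on coins, a negative value sells that fraction of current holdings, and 0 holds. This semantics and the function below are an assumption for illustration, not the book's actual CryptoTradingVisualContinuousEnv logic:

```python
def execute_continuous_action(action, cash, coins, price):
    """action in [-1, 1]: positive = spend that fraction of cash on coins,
    negative = sell that fraction of current holdings, 0 = hold.
    Returns the updated (cash, coins) balances."""
    action = max(-1.0, min(1.0, action))    # clip to the valid range
    if action > 0:
        spend = action * cash
        cash -= spend
        coins += spend / price
    elif action < 0:
        sold = -action * coins
        coins -= sold
        cash += sold * price
    return cash, coins
```

In a Gym environment, such an action would typically be declared as a one-dimensional continuous action space (e.g., gym's spaces.Box with low=-1 and high=1), in contrast to the spaces.Discrete(3) used by the buy/sell/hold recipes.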

Getting ready

To complete this recipe, make sure you have the latest version of the code. You will need to activate the tf2rl-cookbook Python/conda virtual environment. Make sure that you update the environment so that it matches the latest conda environment specification file (tfrl-cookbook.yml), which can be found in this cookbook's code repository. If the following import statements run without any issues, you are ready to get started:

import os
import random
from typing import Dict
import cv2
import gym
import numpy as np
import pandas as pd
from...

Training a cryptocurrency trading bot using RL

The soft actor-critic Agent is one of the most popular and state-of-the-art RL Agents available and is based on an off-policy, maximum entropy-based deep RL algorithm. This recipe provides all the ingredients you will need to build a soft actor-critic Agent from scratch using TensorFlow 2.x and train it for cryptocurrency (Bitcoin, Ethereum, and so on) trading using real data from the Gemini cryptocurrency exchange.
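At the heart of the soft actor-critic algorithm is the entropy-regularized Bellman backup used to train its twin critics. The standard target is y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s')), where taking the minimum of the two target critics curbs overestimation and the alpha-weighted log-probability term rewards policy entropy. The helper below is a scalar sketch of that formula (the function name and argument layout are illustrative, not the book's implementation):

```python
def sac_critic_target(reward, done, gamma, alpha,
                      q1_next, q2_next, next_logprob):
    """Soft Bellman target for SAC's critic update:
    y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s'))."""
    # Clipped double-Q value with an entropy bonus (note: log pi is
    # typically negative, so -alpha * log pi adds value for exploration)
    soft_value = min(q1_next, q2_next) - alpha * next_logprob
    return reward + gamma * (1.0 - float(done)) * soft_value
```

In the full agent, this target is computed over a minibatch sampled from a replay buffer (SAC is off-policy) and both critics regress toward it, while the actor maximizes the soft Q-value of its own squashed-Gaussian actions.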

Getting ready

To complete this recipe, make sure you have the latest version of the code. You will need to activate the tf2rl-cookbook Python/conda virtual environment. Make sure that you update the environment so that it matches the latest conda environment specification file (tfrl-cookbook.yml), which can be found in this cookbook's code repository. If the following import statements run without any issues, you are ready to get started:

import functools
import os
import random
from collections import deque
from functools...