The course explores automated decision-making from a computational perspective through a combination of classic papers and more recent work. It examines efficient algorithms, where they exist, for learning single-agent and multi-agent behavioral policies, as well as approaches to learning near-optimal decisions from experience.
Topics include Markov decision processes, stochastic and repeated games, partially observable Markov decision processes, reinforcement learning, deep reinforcement learning, and multi-agent deep reinforcement learning. Of particular interest will be issues of generalization, exploration, and representation. We will cover these topics through lecture videos, paper readings, and the book Reinforcement Learning by Sutton and Barto.
Students will replicate a result in a published paper in the area and work on more complex environments, such as those found in the OpenAI Gym library. Additionally, students will train agents to solve a more complex, multi-agent environment, namely the Google Research Football environment, and will have an opportunity to develop state-of-the-art or novel techniques.
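Environments in the OpenAI Gym library share a common `reset`/`step` interface, and nearly all agent training code is built around the same interaction loop. The sketch below illustrates that loop using a toy stand-in environment rather than an actual Gym dependency; the `ToyEnv` class and its dynamics are invented for illustration and are not part of any course material.

```python
import random

class ToyEnv:
    """Illustrative stand-in for a Gym-style environment.

    Real Gym environments expose the same interface: reset() returns an
    initial observation, and step(action) returns a tuple of
    (observation, reward, done, info).
    """

    def __init__(self, horizon=10):
        self.horizon = horizon  # episode length
        self.t = 0

    def reset(self):
        self.t = 0
        return self.t  # initial observation

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 1 else 0.0  # toy reward signal
        done = self.t >= self.horizon         # episode ends at the horizon
        return self.t, reward, done, {}

# The standard agent-environment interaction loop used throughout RL:
env = ToyEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.choice([0, 1])  # a random policy, for illustration
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

Replicating a published result typically amounts to replacing the random policy above with a learning algorithm while keeping this loop structure intact.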
The ability to run Docker locally or utilize a cloud computing service is strongly recommended. The instructional staff will not provide technical support or cloud computing credits.
Note: Sample syllabi are provided for informational purposes only. For the most up-to-date information, consult the official course documentation.
To access the public version of this course's content, click here, then log into your Ed Lessons account. If you have not yet created an Ed Lessons account, enter your name and email address, click the activation link sent to your email, and then return to the course content link above.
Before Taking This Class...
Suggested Background Knowledge
Successful completion of “CS 7641: Machine Learning” is strongly recommended, with particular emphasis on understanding neural networks. Students should also be familiar with, or willing to learn, the following:
- Linear Algebra, Calculus, and Statistics
- Scientific computing in Python using NumPy
- Training and evaluating neural networks using PyTorch
- Using Jupyter notebooks to experiment with algorithms
- Using Matplotlib or other visualization software to create graphs
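As a concrete example of the NumPy fluency assumed above, the following sketch computes discounted returns for a reward sequence, a calculation that recurs throughout reinforcement learning. The function name, reward values, and discount factor are chosen purely for illustration:

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = sum_k gamma**k * r_{t+k} for each timestep t."""
    returns = np.zeros_like(rewards, dtype=float)
    running = 0.0
    # Work backwards so each return reuses the one after it.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# A reward of 1 arriving on the final step, discounted at gamma = 0.5:
result = discounted_returns(np.array([0.0, 0.0, 1.0]), gamma=0.5)
# result is [0.25, 0.5, 1.0]
```

Vectorized idioms like this, rather than element-by-element Python loops over large arrays, are the level of NumPy comfort the course assumes.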
Technical Requirements and Software
- Browser and connection speed: An up-to-date version of Chrome or Firefox is strongly recommended. We also support Internet Explorer 9 and the desktop versions of Internet Explorer 10 and above (not the Metro versions). A download speed of 2+ Mbps is recommended; the minimum requirement is 0.768 Mbps.
- Operating system:
- Ubuntu Linux 20.04 or higher is recommended
- A PC running Windows XP or higher with the latest updates installed, or a Mac running OS X 10.6 or higher with the latest updates installed, will be required for the final exam
- CPU: An x86-64 CPU (Intel or AMD) is strongly recommended, as some scientific computing packages may not ship ARM-native code. If you use an ARM-based processor, it will be up to you to set up a suitable environment for your experiments, which may require obtaining, at your own cost, a cloud instance with enough compute power to run them.
- (Optional but strongly recommended) We will provide you with a Dockerfile that contains a suitable setup for RL research. To run this environment, you will need at least 50 GB of free disk space and 8 GB of RAM; a CUDA-compatible GPU is recommended. The teaching staff will provide this file during the course.
All Georgia Tech students are expected to uphold the Georgia Tech Academic Honor Code. This course may impose additional academic integrity stipulations; consult the official course documentation for more information.