
Master of Science in Computer Science


Gain the Knowledge You Need to Accelerate Your Career

Computer science is driving innovation in technology, finance, healthcare and beyond. The UT Austin online master’s degree in computer science gives you the skills to design, develop, and optimize the technologies we use to create, communicate and serve.


Gain Critical CS Skills to Meet Industry Demand


Earn Your Degree From a Top-Ranked CS School1


Affordable, Advanced Degree Priced at $10,000 + Fees2

Curriculum

The UT Austin online CS master’s curriculum incorporates cutting-edge foundational coursework in computer science to help you develop a strong understanding of the field that can be directly applied to your career.

Our elective courses focus on content that is in high demand within the tech industry, including advanced operating systems, programming languages, online learning, optimization and machine learning. A variety of electives allow you to personalize your education.

Taught by tenured UT Computer Science faculty — many of whom are award-winning leaders in the CS research community — the MSCS program offers rigorous training to expand your expertise, with elective options tailored to your interests.

1Top Computer Science Schools, U.S. News & World Report, Ranked 2022-2023. 
2International student fees and late registration fees may apply.

Courses

3 required courses + 7 elective courses = 10 courses

Ten Courses

The online master’s degree in computer science is a 30-hour program consisting of nine hours of required courses and 21 hours of electives. Each course counts for three credit hours, and you must take a total of 10 courses to graduate. To complete your three required courses, you must take one course from each of the Theory, Systems, and Applications course categories.


Applications Courses

This class covers advanced topics in deep learning, ranging from optimization to computer vision, computer graphics and unsupervised feature learning, and touches on deep language models, as well as deep learning for games.

Part 1 covers the basic building blocks and intuitions behind designing, training, tuning, and monitoring of deep networks. The class covers both the theory of deep learning and hands-on implementation sessions in PyTorch. In the homework assignments, we will develop a vision system for a racing simulator, SuperTuxKart, from scratch.

Part 2 covers a series of application areas of deep networks: computer vision, sequence modeling in natural language processing, deep reinforcement learning, generative modeling, and adversarial learning. In the homework assignments, we develop a vision system and racing agent for the racing simulator SuperTuxKart, from scratch.

What You Will Learn

  • About the inner workings of deep networks and computer vision models
  • How to design, train and debug deep networks in PyTorch
  • How to design and understand sequence models
  • How to use deep networks to control a simple sensory motor agent

Syllabus

  • Background
  • First Example
  • Deep Networks
  • Convolutional Networks
  • Making it Work
  • Computer Vision
  • Sequence Modeling
  • Reinforcement Learning
  • Special Topics
  • Summary
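As a dependency-free taste of what designing, training, and debugging a deep network involves, the sketch below builds a tiny two-layer ReLU network in pure Python and checks a hand-derived gradient against a finite difference. The layer sizes, weights, and training example are illustrative assumptions; actual coursework is done in PyTorch.

```python
# A tiny two-layer network (2 inputs -> 2 ReLU hidden units -> 1 output)
# with a squared loss, plus a finite-difference check of the analytic
# gradient for the output weights. Pure Python; illustrative only.

def forward(w1, w2, x):
    """Forward pass: ReLU hidden layer, then a linear output."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return sum(wi * hi for wi, hi in zip(w2, h)), h

def loss(w1, w2, x, y):
    out, _ = forward(w1, w2, x)
    return 0.5 * (out - y) ** 2

def grad_w2(w1, w2, x, y):
    """Analytic gradient of the loss w.r.t. the output weights w2."""
    out, h = forward(w1, w2, x)
    return [(out - y) * hi for hi in h]

w1 = [[0.5, -0.2], [0.3, 0.8]]   # hidden-layer weights (assumed values)
w2 = [1.0, -1.0]                 # output weights
x, y = [1.0, 2.0], 0.5           # one training example

g, eps = grad_w2(w1, w2, x, y), 1e-6
for i in range(2):               # numeric vs. analytic gradient
    bumped = list(w2)
    bumped[i] += eps
    numeric = (loss(w1, bumped, x, y) - loss(w1, w2, x, y)) / eps
    assert abs(numeric - g[i]) < 1e-4
```

The same kind of check, with automatic rather than hand-derived gradients, is what PyTorch's `torch.autograd.gradcheck` performs.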
Philipp Krähenbühl

Assistant Professor, Computer Science

This course focuses on core algorithmic and statistical concepts in machine learning.

Tools from machine learning are now ubiquitous in the sciences with applications in engineering, computer vision, and biology, among others. This class introduces the fundamental mathematical models, algorithms, and statistical tools needed to perform core tasks in machine learning. Applications of these ideas are illustrated using programming examples on various data sets.

Topics include pattern recognition, PAC learning, overfitting, decision trees, classification, linear regression, logistic regression, gradient descent, feature projection, dimensionality reduction, maximum likelihood, Bayesian methods, and neural networks.

What You Will Learn

  • Techniques for supervised learning including classification and regression
  • Algorithms for unsupervised learning including feature extraction
  • Statistical methods for interpreting models generated by learning algorithms

Syllabus

  • Mistake Bounded Learning (1 week)
  • Decision Trees; PAC Learning (1 week)
  • Cross Validation; VC Dimension; Perceptron (1 week)
  • Linear Regression; Gradient Descent (1 week)
  • Boosting (.5 week)
  • PCA; SVD (1.5 weeks)
  • Maximum likelihood estimation (1 week)
  • Bayesian inference (1 week)
  • K-means and EM (1-1.5 weeks)
  • Multivariate models and graphical models (1-1.5 weeks)
  • Neural networks; generative adversarial networks (GAN) (1-1.5 weeks)
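The perceptron from week three fits in a few lines of pure Python; the toy linearly separable dataset below is an illustrative assumption, not a course assignment.

```python
# Perceptron learning with labels in {-1, +1}: on each mistake, add y*x to
# the weights and y to the bias. Converges on linearly separable data.

def perceptron(data, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:   # mistake
                w = [w[0] + y * x[0], w[1] + y * x[1]]
                b += y
    return w, b

data = [([2.0, 1.0], 1), ([1.0, 3.0], 1),
        ([-1.0, -2.0], -1), ([-2.0, 1.0], -1)]
w, b = perceptron(data)
assert all(y * (w[0] * x[0] + w[1] * x[1] + b) > 0 for x, y in data)
```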
Adam Klivans

Professor, Computer Science

Qiang Liu

Assistant Professor, Computer Science

This course focuses on modern natural language processing using statistical methods and deep learning. Problems addressed include syntactic and semantic analysis of text as well as applications such as sentiment analysis, question answering, and machine translation. Machine learning concepts covered include binary and multiclass classification, sequence tagging, feedforward, recurrent, and self-attentive neural networks, and pre-training / transfer learning.

What You Will Learn

  • Linguistics fundamentals: syntax, lexical and distributional semantics, compositional semantics
  • Machine learning models for NLP: classifiers, sequence taggers, deep learning models
  • Knowledge of how to apply ML techniques to real NLP tasks

Syllabus

  • ML fundamentals, linear classification, sentiment analysis (1.5 weeks)
  • Neural classification and word embeddings (1 week)
  • RNNs, language modeling, and pre-training basics (1 week)
  • Tagging with sequence models: Hidden Markov Models and Conditional Random Fields (1 week)
  • Syntactic parsing: constituency and dependency parsing, models, and inference (1.5 weeks)
  • Language modeling revisited (1 week)
  • Question answering and semantics (1.5 weeks)
  • Machine translation (1.5 weeks)
  • BERT and modern pre-training (1 week)
  • Applications: summarization, dialogue, etc. (1-1.5 weeks)
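As a minimal illustration of the language-modeling weeks, the sketch below builds a bigram model with add-one smoothing over a toy corpus (the corpus and whitespace tokenization are illustrative assumptions).

```python
# Bigram language model with add-one (Laplace) smoothing.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the log .".split()
vocab = set(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
prev_counts = Counter(corpus[:-1])           # how often each word precedes another

def p(word, prev):
    """P(word | prev) with add-one smoothing over the vocabulary."""
    return (bigrams[(prev, word)] + 1) / (prev_counts[prev] + len(vocab))
```

Smoothing gives unseen pairs a small nonzero probability while keeping each conditional distribution normalized: summing `p(w, "the")` over the vocabulary yields 1.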
Greg Durrett

Assistant Professor, Computer Science

This course introduces the theory and practice of modern reinforcement learning. Reinforcement learning problems involve learning what to do, that is, how to map situations to actions, so as to maximize a numerical reward signal. The course covers model-free and model-based reinforcement learning methods, especially those based on temporal difference learning and policy gradient algorithms, along with the essentials of reinforcement learning (RL) theory and how to apply it to real-world sequential decision problems. Reinforcement learning is an essential part of fields ranging from modern robotics to game playing (e.g., Poker, Go, and StarCraft). The material covered in this class will provide an understanding of the core fundamentals of reinforcement learning, preparing students to apply it to problems of their choosing, as well as allowing them to understand modern RL research. Professors Peter Stone and Scott Niekum are active reinforcement learning researchers and bring their expertise and excitement for RL to the class.

What You Will Learn

  • Fundamental reinforcement learning theory and how to apply it to real-world problems
  • Techniques for evaluating policies and learning optimal policies in sequential decision problems
  • The differences and tradeoffs between value function, policy search, and actor-critic methods in reinforcement learning
  • When and how to apply model-based vs. model-free learning methods
  • Approaches for balancing exploration and exploitation during learning
  • How to learn from both on-policy and off-policy data

Syllabus

  • Multi-Armed Bandits
  • Finite Markov Decision Processes
  • Dynamic Programming
  • Monte Carlo Methods
  • Temporal-Difference Learning
  • n-step Bootstrapping
  • Planning and Learning
  • On-Policy Prediction with Approximation
  • On-Policy Control with Approximation
  • Off-Policy Methods with Approximation
  • Eligibility Traces
  • Policy Gradient Methods
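The dynamic-programming unit can be sketched with value iteration on a toy problem. The four-state chain below (move left or right, reward 1 on reaching the goal, discount 0.9) is an illustrative assumption, not course material.

```python
# Value iteration on a deterministic chain MDP with states 0..3;
# state 3 is the terminal goal.
GAMMA, GOAL = 0.9, 3

def step(s, a):                     # a in {-1, +1}
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL and s != GOAL else 0.0)

V = [0.0] * 4
for _ in range(50):                 # sweep until the values stop changing
    V = [0.0 if s == GOAL else
         max(r + GAMMA * V[s2] for s2, r in (step(s, -1), step(s, +1)))
         for s in range(4)]
```

The optimal values decay geometrically with distance from the goal (1, 0.9, 0.81), exactly the discounting the Bellman equation predicts.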
Peter Stone

Professor, Computer Science

Scott Niekum

Associate Professor, Computer Science

Theory Courses

Linear algebra is one of the fundamental tools for computational and data scientists. In Advanced Linear Algebra for Computing, you build your knowledge, understanding, and skills in linear algebra, practical algorithms for matrix computations, and the analysis of how floating-point arithmetic, as performed by computers, affects correctness.

What You Will Learn

  • Deciphering a matrix using the Singular Value Decomposition
  • Quantifying and qualifying numerical error
  • Solving linear systems and linear least-squares problems
  • Computing and employing eigenvalues and eigenvectors

Syllabus

  • Norms (1 week)
  • The Singular Value Decomposition (1 week)
  • The QR Decomposition (1 week)
  • Linear Least Squares (1 week)
  • LU Factorization (1 week)
  • Numerical Stability (1 week)
  • Solving Sparse Linear Systems Part 1 (1 week)
  • Solving Sparse Linear Systems Part 2 (1 week)
  • Eigenvalues and eigenvectors (1 week)
  • Practical Solutions of the Hermitian Eigenvalue Problem (1 week)
  • The Symmetric QR Algorithm (1 week)
  • High-Performance Algorithms (1 week)
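As a small taste of the eigenvalue material, the sketch below runs power iteration, the simplest method for computing a dominant eigenpair, on a 2x2 symmetric matrix. The matrix and iteration count are illustrative assumptions; the course develops far more practical algorithms.

```python
# Power iteration: repeatedly apply A and normalize. For A = [[2,1],[1,2]]
# the eigenvalues are 3 and 1, with dominant eigenvector along (1, 1).
import math

A = [[2.0, 1.0], [1.0, 2.0]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

v = [1.0, 0.0]                       # any vector not orthogonal to (1, 1)
for _ in range(60):
    w = matvec(A, v)
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

lam = sum(x * y for x, y in zip(v, matvec(A, v)))   # Rayleigh quotient
```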
Maggie Myers

Lecturer, Computer Science

Robert van de Geijn

Professor, Computer Science

Modern computational applications often involve massive data sets. In this setting, it is crucial to employ asymptotically efficient algorithms. This course presents techniques for the design and analysis of polynomial-time algorithms. Unfortunately, many optimization problems that arise in practice are unlikely to be polynomial-time solvable. This course presents techniques for establishing evidence of such computational intractability, especially NP-hardness. Even if a given optimization problem is NP-hard, it may be possible to compute near-optimal solutions efficiently. This course presents techniques for the design and analysis of efficient approximation algorithms.

Topics include growth of functions, divide-and-conquer algorithms, dynamic programming, greedy algorithms, basic graph algorithms, network flow, minimum-cost matching, linear programming, randomized algorithms, data structures (hashing, amortized analysis, splay trees, union-find, and Fibonacci heaps), online algorithms for paging, P, NP, NP-completeness, and approximation algorithms.

What You Will Learn

  • Techniques for the design and analysis of efficient algorithms
  • Techniques for establishing evidence of computational intractability
  • Techniques for coping with computational intractability

Syllabus

  • Growth of functions, divide-and-conquer algorithms (1 week)
  • Dynamic programming (0.5 weeks)
  • Greedy algorithms (1 week)
  • Basic graph algorithms (0.5 weeks)
  • Network flow algorithms (1.5 weeks)
  • Minimum-cost matching (0.5 weeks)
  • Linear programming (0.5 weeks)
  • Randomized algorithms (0.5 weeks)
  • Hashing (1 week)
  • Amortized analysis, splay trees, union-find, Fibonacci heaps (1.5 weeks)
  • Online algorithms (0.5 weeks)
  • P, NP, NP-completeness (1.5 weeks)
  • Approximation algorithms (1.5 weeks)
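The tabulation pattern from the dynamic-programming week can be sketched with the classic longest-common-subsequence recurrence (an illustrative example, not an assignment):

```python
# Longest common subsequence via dynamic programming: T[i][j] is the LCS
# length of the first i characters of a and the first j characters of b.
# Runs in O(mn) time and space.

def lcs(a, b):
    m, n = len(a), len(b)
    T = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            T[i][j] = (T[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                       else max(T[i - 1][j], T[i][j - 1]))
    return T[m][n]
```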
Vijaya Ramachandran

Professor, Computer Science

Greg Plaxton

Professor, Computer Science

This is a course on computational logic and its applications in computer science, particularly in the context of software verification. Computational logic is a fundamental part of many areas of computer science, including artificial intelligence and programming languages. This class introduces the fundamentals of computational logic and investigates its many applications in computer science. Specifically, the course covers a variety of widely used logical theories and looks at algorithms for determining satisfiability in these logics as well as their applications.

Syllabus

  • Normal forms; decision procedures for propositional logic; SAT solvers (2 weeks)
  • Applications of SAT solvers and binary decision diagrams (1 week)
  • Semantics of first-order logic and theoretical properties (1 week)
  • First-order theorem proving (1.5 weeks)
  • Intro to first-order theories (0.5 week)
  • Theory of equality (0.5 week)
  • Decision procedures for rationals and integers (1.5 weeks)
  • DPLL(T) framework and SMT solvers (1 week)
  • Basics of software verification (1 week)
  • Automating software verification (2 weeks)
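The satisfiability material in the first weeks can be illustrated with a bare-bones DPLL procedure, representing clauses as lists of signed integers. This is a sketch, orders of magnitude simpler than the SAT solvers the course studies.

```python
# DPLL satisfiability check for CNF. A clause is a list of integer literals;
# a negative literal is a negated variable.

def dpll(clauses, assignment=()):
    # Drop satisfied clauses, then remove falsified literals.
    clauses = [c for c in clauses if not any(l in assignment for l in c)]
    clauses = [[l for l in c if -l not in assignment] for c in clauses]
    if not clauses:
        return True                      # every clause satisfied
    if any(not c for c in clauses):
        return False                     # an empty clause is a conflict
    lit = clauses[0][0]                  # branch on the first open literal
    return (dpll(clauses, assignment + (lit,)) or
            dpll(clauses, assignment + (-lit,)))
```

For example, `[[1, 2], [-1, 2], [-2, 1]]` is satisfiable (set both variables true), while `[[1], [-1]]` is not.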
Işıl Dillig

Professor, Computer Science

This course is a first introduction to the theory of quantum computing and information. It covers the rules of quantum mechanics (qubits, unitary transformations, density matrices); quantum gates and circuits; entanglement; the Bell inequality; protocols for teleportation, quantum key distribution, quantum money and other tasks; basic quantum algorithms such as Shor’s and Grover’s; basic quantum complexity theory; and the challenges of building scalable quantum computers. Previous exposure to quantum mechanics is not required.  The prerequisites are linear algebra and some exposure to classical algorithms and programming. The course is mainly theoretical, although it includes limited use of IBM Q Experience to design quantum circuits and run them on real quantum hardware.

What You Will Learn

  • The fundamental concepts of quantum information and computation
  • How basic quantum algorithms and protocols work
  • An informed view of the capabilities and limitations of quantum computers, and the challenges in building them

Syllabus

  • The Church-Turing Thesis and Classical Probability Theory
  • The Basic Rules of Quantum Mechanics
  • Quantum Gates and Circuits
  • The Zeno Effect and Elitzur-Vaidman Bomb
  • Multi-Qubit States and Entanglement
  • Mixed States and the Bloch Sphere
  • The No-Cloning Theorem, Wiesner’s Quantum Money, and BB84 Quantum Key Distribution
  • Superdense Coding and Quantum Teleportation
  • Entanglement Swapping, the GHZ State, and Monogamy of Entanglement
  • Bell’s Inequality
  • Interpretations of Quantum Mechanics
  • Universal Gate Sets
  • Quantum Query Complexity
  • Deutsch-Jozsa Algorithm
  • Bernstein-Vazirani Algorithm
  • Simon’s Algorithm
  • Shor’s Algorithm (Quantum Fourier Transform, Continued Fractions...)
  • Grover’s Algorithm and Applications
  • Quantum Error-Correcting Codes
  • Experimental Realizations of Quantum Computing
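The circuit model can be simulated directly with small matrices. The sketch below prepares a Bell state by applying a Hadamard and then a CNOT to |00> (pure Python; the amplitudes here happen to be real, and the course itself uses IBM Q Experience to run circuits on real hardware).

```python
# State-vector simulation over the 2-qubit basis |00>, |01>, |10>, |11>
# (qubit 0 is the most significant bit).
import math

def apply(gate, state):
    return [sum(gate[i][j] * state[j] for j in range(4)) for i in range(4)]

h = 1 / math.sqrt(2)
H0 = [[h, 0, h, 0],      # Hadamard on qubit 0, identity on qubit 1
      [0, h, 0, h],
      [h, 0, -h, 0],
      [0, h, 0, -h]]
CNOT = [[1, 0, 0, 0],    # control = qubit 0, target = qubit 1
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1.0, 0.0, 0.0, 0.0]              # |00>
bell = apply(CNOT, apply(H0, state))      # (|00> + |11>) / sqrt(2)
probs = [abs(a) ** 2 for a in bell]       # measurement probabilities
```

Measuring either qubit gives 0 or 1 with probability 1/2 each, but the two outcomes are perfectly correlated, which is the entanglement the Bell-inequality unit builds on.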
Scott Aaronson

Professor, Computer Science

This class has two major themes: algorithms for convex optimization and algorithms for online learning. The first part of the course will focus on algorithms for large scale convex optimization. A particular focus of this development will be for problems in Machine Learning, and this will be emphasized in the lectures, as well as in the problem sets. The second half of the course will then turn to applications of these ideas to online learning.

What You Will Learn

  • Techniques for convex optimization such as gradient descent and its variants
  • Algorithms for online learning such as follow the leader and weighted majority
  • Multi-Armed Bandit problem and its variants

Syllabus

  • Convex sets and Convex functions, including basic definitions of convexity, smoothness and strong convexity
  • First order optimality conditions for unconstrained and constrained convex optimization problems
  • Gradient and subgradient descent: Lipschitz functions, Smooth functions, Smooth and Strongly Convex functions
  • Oracle Lower Bounds
  • Accelerated Gradient Methods
  • Proximal and projected gradient descent. ISTA and FISTA
  • Mirror Descent
  • Frank Wolfe
  • Stochastic Gradient Descent
  • Stochastic bandits with finite number of arms: Explore and commit algorithm, UCB algorithm and regret analysis
  • Adversarial bandits with finite number of arms: Exponential weighting and importance sampling, Exp3 algorithm and variants
  • Multi-armed Bandit (MAB) lower bounds: minimax bounds, problem-dependent bounds
  • Contextual bandits: Bandits with experts — the Exp4 algorithm, stochastic linear bandits, UCB algorithm with confidence balls (LinUCB and variants)
  • Contextual bandits in the adversarial setting: Online linear optimization (with full and bandit feedback), Follow The Leader (FTL) and Follow the Regularized Leader (FTRL), Mirror Descent
  • Online Classification: Halfing algorithm, Weighted majority algorithm, Perceptron and Winnow algorithms (with connections to Online Gradient Descent and Online Mirror Descent)
  • Other Topics: Combinatorial bandits, Bandits for pure exploration, Bandits in a Bayesian setting, Thompson sampling
  • Newton and Quasi-Newton Methods
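The weighted majority algorithm listed above can be sketched in a few lines: follow a weighted vote of the experts and halve the weight of each expert that errs. The experts' predictions and outcomes below are toy assumptions.

```python
# Weighted majority over n experts making 0/1 predictions.

def weighted_majority(predictions, outcomes, beta=0.5):
    """predictions[t][i] is expert i's call at round t; returns the
    learner's mistake count and the final expert weights."""
    w = [1.0] * len(predictions[0])
    mistakes = 0
    for preds, y in zip(predictions, outcomes):
        vote_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote_1 >= sum(w) / 2 else 0
        mistakes += (guess != y)
        w = [wi * (beta if p != y else 1.0) for wi, p in zip(w, preds)]
    return mistakes, w

preds = [[1, 0, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]
truth = [1, 0, 1, 1]
mistakes, w = weighted_majority(preds, truth)
```

Expert 0 is always right here, so its weight stays at 1 while the others decay; the learner's mistake bound is logarithmic in the number of experts plus a constant times the best expert's mistakes.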
Constantine Caramanis

Professor, Electrical & Computer Engineering

Sanjay Shakkottai

Professor, Electrical & Computer Engineering

This class covers linear programming and convex optimization. These are fundamental conceptual and algorithmic building blocks for applications across science and engineering. Indeed, any time a problem can be cast as maximizing or minimizing an objective subject to constraints, the next step is to use a method from linear or convex optimization. Covered topics include the formulation and geometry of LPs, duality and min-max, primal and dual algorithms for solving LPs, second-order cone programming (SOCP) and semidefinite programming (SDP), unconstrained convex optimization and its algorithms (gradient descent and Newton's method), constrained convex optimization, duality, variants of gradient descent (stochastic, subgradient, etc.) and their rates of convergence, and momentum methods.

Syllabus

  • Convex sets, convex functions, Convex Programs (1 week)
  • Linear Programs (LPs), Geometry of LPs, Duality in LPs (1 week)
  • Weak duality, Strong duality, Complementary slackness (1 week)
  • LP duality: Robust Linear Programming, Two person 0-sum games, Max-flow min-cut (1 week)
  • Semidefinite programming, Duality in convex programs, Strong duality (1 week)
  • Duality and Sensitivity, KKT Conditions, Convex Duality Examples: Maximum Entropy (1 week)
  • Convex Duality: SVMs and the Kernel Trick, Convex conjugates, Gradient descent (1 week)
  • Line search, Gradient Descent: Convergence rate and step size, Gradient descent and strong convexity (1 week)
  • Frank Wolfe method, Coordinate descent, Subgradients (1 week)
  • Subgradient descent, Proximal gradient descent, Newton method (1 week)
  • Newton method convergence, Quasi-newton methods, Barrier method (1 week)
  • Accelerated Gradient descent, Stochastic gradient descent (SGD), Mini-batch SGD, Variance reduction in SGD (1 week)
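Projected gradient descent, one of the gradient-descent variants covered above, can be sketched on a one-dimensional problem: minimize (x - 3)^2 over the box [0, 1] by taking a gradient step and then projecting back onto the feasible set. The constrained optimum is the boundary point x = 1. (The objective, step size, and constraint set are illustrative assumptions.)

```python
# Projected gradient descent for min (x - 3)^2 subject to 0 <= x <= 1.

def project(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

x, step = 0.0, 0.1
for _ in range(100):
    grad = 2 * (x - 3)              # derivative of the objective
    x = project(x - step * grad)    # gradient step, then projection
```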
Sujay Sanghavi

Associate Professor, Electrical & Computer Engineering

Constantine Caramanis

Professor, Electrical & Computer Engineering

Systems Courses

This course focuses on the study of the formal structure, design principles, organization, implementation, and performance analysis of multiprogramming and/or multiprocessor computer systems.

Syllabus

  • CPU Virtualization
  • Memory Virtualization
  • Storage Virtualization
  • Advanced Topics: OS Architecture, Containers, Full-Machine Virtualization, Heterogeneity
  • Projects: 5 projects spread across the semester, 3 of which involve developing features on the xv6 operating system.
Vijay Chidambaram

Associate Professor, Computer Science

Students will study Android APIs and learn to build significant Android applications. The course will have a practical focus, with significant in-class programming, programming assignments and a large project (optionally with a partner). The course philosophy is that programming is learned by doing. While the course focuses on Android, we will learn general principles of software engineering and mobile app development.

The course assumes familiarity with programming and object-oriented terminology. The course is taught entirely in Kotlin, the modern sibling of Java. We will spend a bit of time reviewing Kotlin, but you are expected to be familiar enough with Java that the transition will be seamless. The course does not assume any previous experience with Android programming.

What You Will Learn

  • How to build an Android app, using the latest software technology (e.g., the Jetpack libraries)
  • How to effectively use Android Studio for programming, testing, and debugging
  • General principles of software engineering and mobile app development
  • Programming tools like the git version control system

Syllabus

  • Kotlin introduction
  • GUI widgets and layout
  • Activities and their lifecycle. Implicit and explicit intents
  • ListView and RecyclerView
  • Fragments
  • View models
  • Live data
  • Model-View-View model (MVVM)
  • Network services
  • Firebase authentication and Firestore cloud database
  • Databases and SQL
  • Maps
  • Persistent state, files
Emmett Witchel

Professor, Computer Science

Structure and Implementation of Modern Programming Languages covers the component technologies used in implementing modern programming languages, shows their integration into a system, and discusses connections between the structure of programming languages and their implementations.

What You Will Learn

  • Techniques for program text analysis, including lexing, parsing, and semantic analysis
  • Techniques for machine code synthesis, including code generation and register allocation
  • Machine architectures, both physical and virtual
  • Techniques for run-time actions, including memory management and dynamic linking
  • Mathematical underpinnings, including the grammar-language-automaton triad
  • The impact of individual language features on implementation techniques

Syllabus

  • The compiler-interpreter spectrum
  • Stack virtual machine architecture: SaM, JVM
  • Lexical analysis; regular grammars, finite automata
  • Context-free grammars, pushdown automata
  • Parsing techniques: recursive descent, Earley, others
  • Semantic analysis: type checking
  • Register machine architecture: x86-64
  • Code generation and register allocation
  • Procedure call/return linkage
  • Memory management: explicit, garbage collection
  • Modularity, linking, interoperability
  • Dynamic linking, position-independent code
  • Implementing objects, inheritance, dynamic dispatch
  • Advanced topics: JIT compilation, stack randomization, bootstrapping
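The lexing-and-parsing pipeline can be sketched end to end for a tiny expression grammar; the recursive-descent evaluator below mirrors the grammar's structure, one function per nonterminal. This is a toy, far removed from the SaM/JVM/x86-64 targets used in the course.

```python
# Lex and evaluate arithmetic expressions with the grammar
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'
import re

def tokenize(src):
    return re.findall(r"\d+|[+\-*/()]", src)

def parse(tokens):
    pos = [0]
    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None
    def eat():
        pos[0] += 1
        return tokens[pos[0] - 1]
    def factor():
        if peek() == "(":
            eat()                      # '('
            v = expr()
            eat()                      # ')' (input assumed well-formed)
            return v
        return int(eat())
    def term():
        v = factor()
        while peek() in ("*", "/"):
            op, f = eat(), factor()
            v = v * f if op == "*" else v / f
        return v
    def expr():
        v = term()
        while peek() in ("+", "-"):
            op, t = eat(), term()
            v = v + t if op == "+" else v - t
        return v
    return expr()
```

Because `expr` loops over `term` and `term` loops over `factor`, the evaluator gets the usual precedence and left associativity for free: `2*(3+4)-5` evaluates to 9.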
Siddhartha Chatterjee

Professor of Instruction, Computer Science

In modern systems, concurrency and parallelism are no longer niche areas reserved for specialists and experts, but a cross-cutting concern to which all designers and developers are exposed. Technology trends suggest concurrency and parallelism are increasingly a cornerstone subject to which all successful programmers will require significant exposure. The objective of this course is to provide students with a strong background in parallel systems fundamentals, along with experience with a diversity of both classical and modern approaches to managing and exploiting concurrency, including shared-memory synchronization, parallel architectures such as GPUs, and distributed parallel frameworks such as MPI and map-reduce.

This course explores parallel systems, from languages to hardware, from large-scale parallel computers to multicore chips, and from traditional parallel scientific computing to modern uses of parallelism. It includes discussion of research methods in graphics, languages, compilers, architecture, and scientific computing.

Syllabus

  • Basic background/terminology/theory
  • Shared memory synchronization
  • Massively parallel architectures
  • Distributed execution frameworks
  • Runtimes and front-end programming
  • Latency vs. throughput
  • Hidden vs. exposed parallelism
  • Performance issues
  • Parallel algorithms: instructive examples
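The map-reduce pattern mentioned above can be sketched with a word count: map each chunk of text to local counts in parallel, then reduce by merging. Threads stand in for distributed workers here (an illustrative sketch, not MPI or Hadoop code).

```python
# Map-reduce word count. Each mapper produces a Counter for its chunk;
# the reduce step merges the partial counts.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

chunks = ["the quick brown fox", "the lazy dog", "the fox"]

def mapper(chunk):
    return Counter(chunk.split())

with ThreadPoolExecutor(max_workers=3) as pool:
    partials = list(pool.map(mapper, chunks))            # map phase, in parallel

counts = reduce(lambda a, b: a + b, partials, Counter())  # reduce phase
```

The merge is associative and commutative, which is exactly what lets real frameworks combine partial results in any order across machines.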
Calvin Lin

Professor, Computer Science

Christopher Rossbach

Associate Professor, Computer Science

This is a course designed to expose students to the latest in virtualization technologies such as virtual machines, containers, serverless, etc. The course also has a significant project component to be completed over the course of the semester. Topics include CPU virtualization, memory virtualization, networking virtualization, storage virtualization, paravirtualization, containers, unikernels, and serverless.

What You Will Learn

  • Basics of Virtual Machines
  • Basics of Containers
  • How CPUs Are Virtualized
  • How Storage Is Virtualized
  • How Networks Are Virtualized
  • Nested Virtualization
  • Hardware Features Assisting Virtualization
  • Deploying Virtual Machines
  • Orchestrating Containers

Syllabus

  • Basics of Virtual Machines (0.5 week)
  • Virtualizing CPUs and DRAM (1 week)
  • Virtualizing network and storage (1 week)
  • Paravirtualization and Nested Virtualization (1 week)
  • Security in Virtualization (0.5 week)
  • Container basics (1 week)
  • Container orchestration frameworks (0.5 week)
  • Unikernels (1 week)
  • Serverless Computing (1 week)
  • Advanced Topics (2 weeks)
Vijay Chidambaram

Associate Professor, Computer Science

Elective Courses

The Case Studies in Machine Learning course presents a broad introduction to the principles and paradigms underlying machine learning, including presentations of its main approaches, overviews of its most important research themes, and new challenges faced by traditional machine learning methods. This course highlights major concepts, techniques, algorithms, and applications in machine learning, from topics such as supervised and unsupervised learning to major recent applications in housing market analysis and transportation. Through this course, students will gain experience using machine learning methods and developing solutions for real-world data analysis problems drawn from practical case studies.

What You Will Learn

  • Understand generic machine learning (ML) terminology
  • Understand motivation and functioning of the most common types of ML methods
  • Understand how to correctly prepare datasets for ML use
  • Understand the distinction between supervised and unsupervised learning, as well as the interests and difficulties of both approaches
  • Practice script implementation (Python/R) of different ML concepts and algorithms covered in the course
  • Apply software, interpret results, and iteratively refine and tune supervised ML models to solve a diverse set of problems on real-world datasets
  • Understand and discuss the contents and contributions of important papers in the ML field
  • Apply ML methods to solve real world problems and present them to mini clients
  • Write reports in which results are assessed and summarized in relation to aims, methods and available data
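As a small taste of the unsupervised-learning material, the sketch below runs Lloyd's k-means on a one-dimensional toy dataset with fixed initial centers (the data and initialization are illustrative assumptions, not case-study materials).

```python
# Lloyd's k-means: assign each point to its nearest center, then move each
# center to the mean of its cluster; repeat.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Assumes no cluster goes empty (true for this toy data).
        centers = [sum(c) / len(c) for c in clusters]
    return centers

points = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
centers = kmeans(points, [0.0, 6.0])     # converges to roughly [1.0, 5.0]
```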
Junfeng Jiao

Associate Professor, School of Architecture

We will investigate how to define planning domains, including representations for world states and actions, covering both symbolic and path planning. We will study algorithms to efficiently find valid plans, with or without optimality guarantees, and with partially ordered or fully specified solutions. We will cover decision-making processes and their applications to real-world problems with complex autonomous systems. We will investigate how, in planning domains with finite state lengths, solutions can be found efficiently via search. Finally, to effectively plan and act in the real world, we will study how to reason about sensing, actuation, and model uncertainty. Throughout the course, we will relate how classical approaches provided early solutions to these problems, and how modern machine learning builds on and complements such classical approaches.

What You Will Learn

  • Defining and solving planning problems
  • Planning algorithms for discrete and continuous state spaces
  • Adversarial planning
  • Bayesian state estimation
  • Decision-making in probabilistic domains

Syllabus

  • Topic 1: Planning Domain Definitions and Planning Strategies (1 week)
  • Topic 2: Heuristic-Guided, and Search-based Planning (2 weeks)
  • Topic 3: Adversarial Planning (2 weeks)
  • Topic 4: Configuration-Space Planning/Sample-Based Planning (2 weeks)
  • Topic 5: Probabilistic Reasoning/Bayesian State Estimation (2 weeks)
  • Topic 7: Markov Decision Processes (1 week)
  • Topic 8: Partially Observable Markov Decision Processes (1 week)
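Search-based planning in a discrete domain can be sketched with breadth-first search, which returns a shortest action sequence when every action has equal cost. The 4x4 grid map below is an illustrative assumption.

```python
# BFS planner on a grid: '.' is free space, '#' is an obstacle.
from collections import deque

GRID = ["....",
        ".##.",
        ".#..",
        "...."]

def plan(start, goal):
    """Return a shortest path as a list of (row, col) cells, or None."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            r2, c2 = nxt
            if (0 <= r2 < 4 and 0 <= c2 < 4 and GRID[r2][c2] == "."
                    and nxt not in parent):
                parent[nxt] = cell          # remember how we reached nxt
                frontier.append(nxt)
    return None

path = plan((0, 0), (3, 3))
```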
Joydeep Biswas

Associate Professor

Important Dates

Fall Application

Spring Application

Please note: Applying to UT Austin is a twofold process. We recommend applying before the priority deadline to ensure your materials are processed in a timely manner.