Master of Data Science (Global) Program
Get a Master of Data Science (Global) from Deakin University and a PG certificate from the University of Texas at Austin.
- 24 Months
- Online
- Hands-on projects
- 1/10th the cost of a traditional 2-year master's degree

Program in collaboration with:
Earn your master's degree from:
- Top 1% of universities globally (QS 2021)
- AACSB and EQUIS accredited
- Victorian Government Award 2020
Why choose this program?
A Global Master's Degree
-
Gain the recognition of a global master's degree from an internationally recognised university
-
Get a global master's degree at 1/10th the cost of a 2-year on-campus master's.
Learn from World-Class Faculty
-
Live Virtual Classes by Deakin Faculty
-
Curriculum designed in a modular structure - foundational and advanced competency track
Practical, Hands-on learning
-
Industry sessions and competency courses delivered by experts and faculty at Deakin University
-
Hands-on Projects
A master's degree from a university in the top 1% of global rankings
Triple crown accreditation: AACSB, AMBA, EQUIS
Dedicated Program & Career Assistance by Great Learning

Become industry-ready with mentorship from experts
Master of Data Science (Global) Program from Deakin University
Online | 24 Months
Learning Path
Month 0
Step 1
Learners will join the Post Graduate Program offered by the University of Texas at Austin and receive the PG certificate upon program completion.
Month 12
Step 2
After completing the PG Program from the University of Texas at Austin, candidates will continue their learning journey with the 12-month online Master of Data Science (Global) from Deakin University.
Month 24
Step 3
Successful learners will receive the Master of Data Science (Global) from Deakin University at 1/10th the cost.
With globally-recognised credentials from leading universities, graduates of the Master of Data Science (Global) from Deakin University become prime candidates for accelerated career progression in the data science field.
Get The Deakin University Advantage
-
Deakin Credentials
Enrolled students get Deakin email IDs through which they will be provided access to Deakin's alumni portal.
-
Connect with your Alumni Community
Join over 300,000 Deakin graduates, reconnect and meet with fellow alumni across the globe.
-
Deakin Alumni Discount*
Deakin alumni are eligible to receive a 10% reduction per unit on enrolment fees on any postgraduate award course at Deakin.
*Terms and Conditions apply
Curriculum
The Master of Data Science Program curriculum consists of foundational and advanced competency tracks that enable learners to master advanced data science skill sets effectively.
FOUNDATIONS
This module introduces Data Science, Statistics, Business Finance, SQL, and Python programming, along with some domain-specific expertise. These topics lay the groundwork you need to continue the journey with minimal hindrance.
- Python/R for Data Science
- Introduction to Python/R
- Dealing with Data using Python/R
- Visualization using Python / R
- Python-Markdown
- Missing Value Treatment
- Exploratory Data Analysis using Python/R
In this topic, you will learn the essentials of either Python or R programming, including syntax and semantics. These two languages are widely used in the field of Data Science.
Here, you will be introduced to either Python or R programming concepts, such as data types, variables, loops, functions and more.
Whether you’re dealing with data that is stored in a file or database, Python and R offer easy ways to read, manipulate and output data. Here, you will look at how to use Python and R to deal with data and discuss some of the benefits of using these two languages for data analysis.
Python and R are two of the most popular programming languages for data visualisation as they have excellent libraries for creating beautiful visualisations. In this topic, we'll take a look at how to create visualisations using Python and R.
Here, you will explore Python-Markdown, a library for rendering Markdown text into beautifully formatted HTML. It is fast, lightweight, and easy to use, making it an ideal choice for projects that need to convert large amounts of Markdown text.
When working with data in Python, it is sometimes necessary to deal with missing values, and Missing Value Treatment is an integral part of data pre-processing. This topic will help you choose the most appropriate treatment method for the data set and the analysis being performed.
Here, you will gain familiarity with Exploratory Data Analysis (EDA), a process of systematically working through a dataset to understand its structure, variables and relationships in a better way. EDA is an essential step in any data analysis workflow and can help you gain insights into your data that you might not have otherwise been able to obtain.
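As a minimal Python sketch of the missing-value treatment and EDA steps described above (pandas is assumed, and sales.csv is a hypothetical example file):

```python
import pandas as pd

# Load a dataset (sales.csv is a hypothetical example file)
df = pd.read_csv("sales.csv")

# Exploratory Data Analysis: structure, summary statistics, missing values
print(df.info())
print(df.describe())
print(df.isna().sum())

# Missing Value Treatment: impute numeric columns with the median,
# categorical columns with the mode
for col in df.select_dtypes(include="number").columns:
    df[col] = df[col].fillna(df[col].median())
for col in df.select_dtypes(exclude="number").columns:
    df[col] = df[col].fillna(df[col].mode().iloc[0])
```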
- Descriptive Statistics
- Introduction to Probability
- Probability Distributions
- Hypothesis Testing and Estimation Goodness of Fit
Learners are introduced to Descriptive Statistics in this topic, a method of studying data analysis that entails summarising and elucidating diverse data sets.
Here, learners are introduced to Probability, which is used for studying randomness. For example, the possibility of an event occurring in a random/sample experiment.
Probability Distributions, a function in statistics, are used to list all the possible values that a random variable can have within a specific range.
Learners will gain knowledge of statistical hypothesis testing, which is an essential approach in carrying out experiments based on gathered data. They will also gain familiarity with estimating the Goodness of Fit, the measure of how closely a model's predicted values match the actual observed values. The approach measures how well a model fits the data.
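As an illustration of the hypothesis-testing and goodness-of-fit ideas above, here is a minimal sketch using SciPy and synthetic data (the numbers are invented for demonstration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=100, scale=15, size=50)  # synthetic sample A
group_b = rng.normal(loc=108, scale=15, size=50)  # synthetic sample B

# Two-sample t-test: are the group means significantly different?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Chi-square goodness of fit: do observed counts match expected counts?
observed = [18, 22, 20, 40]
expected = [25, 25, 25, 25]
chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```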
- Introduction to DBMS
- ER Diagram
- Schema Design
- Key Constraints & Basics of Normalization
- Joins
- Subqueries Involving Joins & Aggregations
- Sorting
- Independent Subqueries
- Correlated Subqueries
- Analytic Functions
- Set Operations
- Grouping and Filtering
In Database Management Systems (DBMS), you will learn how data in a database is stored, edited, and organised so that it can be viewed quickly and easily.
A blueprint that shows the relationship between entities and their attributes is known as an entity-relationship (ER) diagram. Here, you will learn how to create an ER diagram by employing a variety of entities and their characteristics.
The name of the record type, the data type, and other constraints like the primary key and foreign key are all specified in the schema design, which is a schema diagram. Schema design is a logical picture of the complete database.
Key constraints, such as the primary key and foreign key, are used to uniquely identify an entity within its entity set. Normalisation, one of the key ideas of DBMS, is used to organise data and prevent data redundancy. In this topic, you will learn the uses of all the key constraints and the fundamentals of Normalisation.
A Join, as its name suggests, is an operation that combines rows from two or more tables based on the shared fields between them. You will learn about the various kinds of Joins and how to combine data in this topic.
Here, you will explore the process of using subqueries and commands that involve joins and aggregations.
Sorting, as its name suggests, is a method for arranging the information in a particular order to make the provided data easy to grasp. In this topic, you will learn how to sort data in any hierarchy, such as ascending, descending, etc.
An independent subquery is an inner query that is independent of the outer query. You will learn how to use independent subqueries in this topic.
A correlated subquery is an inner query that is dependent on the outer query. You will learn how to use correlated subqueries in this topic.
Here, you will comprehend how to work with an analytic function, which determines values in a set of rows and produces a single result for each row.
A set operation is a process that combines two or more queries into a single result. You will explore a variety of set operators in this topic, including UNION, INTERSECT, and others.
SQL's grouping feature collects rows with the same values and summarises them with aggregate functions such as SUM and AVG. Filtering is a powerful SQL technique used to specify a subset of data that meets specific requirements.
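A minimal sketch of the joins, aggregation, grouping, filtering and sorting described above, run through Python's built-in sqlite3 module (the customers and orders tables are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, country TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL,
                     FOREIGN KEY (customer_id) REFERENCES customers(id));
INSERT INTO customers VALUES (1, 'Asha', 'IN'), (2, 'Ben', 'AU'), (3, 'Chen', 'AU');
INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 200.0), (4, 3, 50.0);
""")

# Join orders to customers, group by country, filter groups with HAVING,
# and sort the result in descending order of total spend
cur.execute("""
SELECT c.country, COUNT(o.id) AS num_orders, SUM(o.amount) AS total_spend
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
GROUP BY c.country
HAVING SUM(o.amount) > 100
ORDER BY total_spend DESC;
""")
print(cur.fetchall())
```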
DATA SCIENCE TECHNIQUES
Moving ahead with the next module of this online Data Science degree program, learners will explore a variety of methods used in Data Science and Analytics that will help them approach any problem.
- Multiple Linear Regression (MLR) for Predictive Analytics
- Logistic Regression
- Linear Discriminant Analysis
A Supervised Machine Learning approach called Multiple Linear Regression involves many data variables used for data analysis. One dependent variable is predicted using a number of independent variables. You will be guided through all the principles of Multiple Linear Regression that are utilised in Machine Learning (ML).
Like Linear Regression, one of the most widely used ML algorithms is Logistic Regression. It is a straightforward classification technique that can predict categorical dependent variables with the aid of independent factors. In this subject, you will be guided through all the principles of Logistic Regression that are utilised in Machine Learning.
Linear Discriminant Analysis (LDA), a dimension reduction method, is used to build machine learning models. You will be guided through all the principles of LDA utilised in Supervised Machine Learning in this subject.
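A minimal scikit-learn sketch of fitting and comparing the Logistic Regression and LDA classifiers described above (the bundled breast-cancer dataset stands in for a real business problem):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("Logistic Regression", LogisticRegression(max_iter=5000)),
                    ("LDA", LinearDiscriminantAnalysis())]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```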
- Analysis of Variance
- Regression Analysis
- Dimension Reduction Techniques
A statistical method used in Data Science called Analysis of Variance, or ANOVA, is used to divide observed variance data into different components for further study and testing. You will learn how to find the crucial differences between the means of two or more groups of data.
The analysis of the relationship between a dependent variable and one or more independent variables through statistical methods is known as Regression Analysis. You will learn about a number of variations in this topic, including linear regression, multiple linear regression and non-linear regression.
Without sacrificing any crucial information, Dimension Reduction converts data from a high-dimensional to a low-dimensional space. You will learn how to use different Dimension Reduction strategies in this subject.
- Introduction to Supervised and Unsupervised Learning
- Clustering
- Decision Trees
- Random Forest
- Neural Networks
A few of the crucial learning algorithms in Machine Learning are supervised and unsupervised learning. In contrast to unsupervised learning models, which are learned using unlabelled data, supervised learning models are trained using labelled data.
Clustering, an unsupervised learning technique, involves the grouping of data. In this module, you will learn various concepts you need to know about the technique and its types, such as hierarchical clustering and K-means clustering, among others.
A supervised machine learning technique called a Decision Tree is applied to classification and regression problems. A Decision Tree is a hierarchical structure where internal nodes represent the dataset features, branches stand in for the decision rules, and each leaf node represents the outcome.
A well-liked supervised learning algorithm in Machine Learning is Random Forest. As the name suggests, it builds numerous decision trees on various subsets of the provided dataset and averages their predictions to improve predictive accuracy.
A deep learning computing system called a neural network is based on the biological neural network of the human brain. You will learn about every application of neural networks in this topic.
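A minimal scikit-learn sketch of training a Decision Tree and a Random Forest as described above (the bundled wine dataset is used purely as a placeholder):

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# A single decision tree versus an ensemble of 200 trees
tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

print("Decision tree accuracy:", tree.score(X_test, y_test))
print("Random forest accuracy:", forest.score(X_test, y_test))
```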
- Introduction to Time Series
- Correlation
- Forecasting
- Autoregressive Moving Average (ARMA) Models
- Autoregressive Integrated Moving Average (ARIMA) Models
- Case Studies
Methods for analysing time-series data to get meaningful statistics and other pertinent data are included in time-series analysis. Based on the previously observed values, time-series forecasting is used to forecast future values.
You'll learn how to address correlation-related issues in this topic.
This session will teach you how to gather data, forecast its future values, and identify its distinctive trends; this technique is called Forecasting.
You will explore the process of forecasting data using the ARMA model.
You will explore the process of forecasting data using the ARIMA model.
Here, you will go through the different kinds of case studies of the topics covered so far.
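A minimal sketch of ARIMA forecasting with statsmodels, in the spirit of the topics above (the monthly series is synthetic and the (1, 1, 1) order is illustrative):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with trend and noise (placeholder for real data)
rng = np.random.default_rng(0)
index = pd.date_range("2020-01-01", periods=48, freq="MS")
series = pd.Series(100 + 0.8 * np.arange(48) + rng.normal(0, 3, 48), index=index)

# Fit an ARIMA(p=1, d=1, q=1) model and forecast the next 6 months
model = ARIMA(series, order=(1, 1, 1))
fitted = model.fit()
print(fitted.forecast(steps=6))
```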
- Handling Unstructured Data
- Machine Learning Algorithms
- Bias Variance Trade-off
- Handling Unbalanced Data
- Boosting
- Model Validation
You will learn how to train your model using unstructured data in this topic. Unstructured input can be images, text, and more.
You will be guided through every notion of the machine learning algorithms in this lesson and apply them to train your models.
You will be guided through the Bias-Variance trade-off and how to use it when building models in this lesson.
When the classes in a dataset are unevenly distributed, the data is called imbalanced or unbalanced data. This lesson teaches you how to train your model on unbalanced data.
A machine learning meta-algorithm under the name of "Boosting" creates robust classifiers from a collection of weak classifiers. Gradient boosting and AdaBoost (adaptive boosting) are common variants of Boosting, which you will understand in this topic.
By comparing each candidate model against the requirements, this topic will show you how to select the one that best fits your problem.
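As an illustration of boosting and model validation on imbalanced data, here is a minimal scikit-learn sketch (the dataset is synthetic and the class proportions are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic imbalanced data: ~90% of samples in class 0, ~10% in class 1
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

# Boosting: an ensemble of weak learners fitted sequentially
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Model validation on the held-out split, reported per class
print(classification_report(y_val, clf.predict(X_val)))
```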
DOMAIN EXPOSURE
This module of Deakin University's Master of Data Science opens the door to real-world problems from various domains and shows students how to apply the ideas of Data Science and Business Analytics to them.
- Overview of ChatGPT and OpenAI
- Timeline of NLP and Generative AI
- Frameworks for understanding ChatGPT and Generative AI
- Implications for work, business, and education
- Output modalities and limitations
- Business roles to leverage ChatGPT
- Prompt engineering for fine-tuning outputs
- Practical demonstration and bonus section on RLHF
- Marketing and Retail Terminologies
- Customer Analytics
- KNIME
- Retail Dashboards
- Customer Churn
- Association Rules Mining
The act of monitoring, controlling, and analysing marketing performance to maximise efficiency and optimise return on investment is called marketing analytics. Retail analytics examines a variety of business metrics, including revenue, consumer demand, inventory levels, and supply chain activity, among others. In this session, you will learn how to use marketing and retail analytics techniques to analyse a company's performance.
Data visualisation, predictive analytics, information management and segmentation are all used in customer analytics to examine how customers influence essential business choices.
The Konstanz Information Miner, known as KNIME, is an open-source platform for data analytics, reporting, and integration that includes several data mining and machine learning components as well as tools for data preprocessing. In this topic, you will learn how to boost data science productivity using the KNIME software, allowing you to concentrate on what you do best.
The Retail Dashboard is a tool for monitoring your company's sales and marketing performance. You will learn how to use the tool to track insights in this module.
The percentage of customers that discontinued using a company's goods and services over a predetermined time is known as customer churn. In this topic, you will learn how to analyse the customers who stopped doing business with you.
A technique used in data mining called Association Rules Mining finds connections between variables in sizable databases. You will learn how to spot patterns among variables from this module.
- Why Credit Risk - Using a Market Case Study
- Comparison of Credit Risk Models
- Overview of Probability of Default (PD) Modeling
- PD Models, Types of Models, Steps to Make a Good Model
- Market Risk
- Value at Risk - Using Stock Case Study
- Fraud Detection
Credit risk occurs when a borrower doesn't pay back any loans or debts. You will learn how to handle credit risk in this session.
This session will teach you all the critical comparisons of various credit risk models.
The probability that a borrower won't be able to pay back its debts is estimated by Probability of Default (PD) Modelling.
In this session, you will discover more about PD and other models. Additionally, you'll learn how to construct a good model.
Market risk develops when an investor sustains losses as a result of outside forces influencing market pricing. You will study market risk management in this session.
Value at Risk (VaR) is a statistical technique for calculating the financial risk faced by the organisation over a specific time frame.
Here, you will explore the process of identifying patterns of fraudulent behaviour, why fraud occurs and predict where it is likely to occur in the future, ultimately helping businesses detect and prevent fraud.
VISUALIZATION AND INSIGHTS
The final module of this Data Science and Business Analytics course covers visualisation and insights. It will show you how to use Tableau to portray data in the best possible ways for rapid and simple insight extraction.
- Introduction to Data Visualization
- Introduction to Tableau
- Basic Charts and Dashboard
- Descriptive Statistics, Dimensions and Measures
- Visual Analytics: Storytelling through Data Dashboards
- Special Chart Types
- Case Study: Hands-on Using Tableau
- Integrate Tableau with Google Sheets
Data visualisation is the technique of graphically representing data and information. You will learn how to use data visualisation tools in this module to show data trends and patterns.
In this session, you will learn all there is to know about Tableau, the most popular tool for problem-solving using data visualisation.
Here, you will learn how to organise data using charts and a Tableau dashboard.
This session will teach you more about Tableau's descriptive statistics, dimensions and measures.
The science of analytical reasoning, known as "visual analytics", is facilitated by engaging visual user interfaces.
In this session, you will learn about several chart types in Tableau, including line charts, bar charts, pie charts, etc.
You will have your hands full in this session as you implement a case study using Tableau.
You will learn how to integrate Tableau with Google Sheets in this session.
PROGRAM CURRICULUM FOR MASTER OF DATA SCIENCE (GLOBAL)
FOUNDATIONS
In this Foundations module, there are two courses where we tackle Statistics and Coding with Python head-on. These two courses lay the groundwork for us to continue the voyage of Artificial Intelligence (AI) and Machine Learning (ML) with as little difficulty as possible.
- Python Basics
- Python Functions and Packages
- Working with Data Structures, Arrays, Vectors & Data Frames
- Jupyter Notebook – Installation & Function
- Pandas, NumPy, Matplotlib, Seaborn
Python is a popular high-level programming language that emphasises readability with straightforward, simple-to-learn syntax. You will learn all the Python programming essentials in this session, and at the end, you will run your first Python application.
For code reuse and software modularity, functions and packages are employed, respectively. You will learn about and use Python's functions and packages for AI with the aid of this session.
One of the most important topics in every programming language is the concept of data structures. For instance, by ranking each player, they aid in the organisation of leaderboard games. Additionally, they support AI and ML in processing speech and images. In this session, you will learn about data structures, including arrays, lists, and tuples, and how to use Python to create vectors and data frames.
Using Jupyter Notebook, you will discover how to apply Python for AI and ML. Using this open-source web tool, we will create and share documents with live code, mathematics, visuals, and text.
After completing this session, you will thoroughly understand data set exploration using Pandas, NumPy, Matplotlib, and Seaborn. These are the most popular Python libraries.
- Data Types
- Dispersion & Skewness
- Uni & Multivariate Analysis
- Data Imputation
- Identifying and Normalizing Outliers
Here, you will explore different data types essential to implementing exploratory data analysis and data processing techniques.
You'll learn how to measure dispersion and skewness, understand how these characteristics can impact the results of your analysis, and learn how to identify and correct them.
Univariate and multivariate analysis are methods used to describe data. They are used to understand how different variables are related to each other and to understand the relationships between variables.
Here, you will explore the process of replacing missing data with substituted values with the help of data imputation.
Here, you will explore the process of identifying and normalising outliers in exploratory data analysis.
- Descriptive Statistics
- Probability & Conditional Probability
- Hypothesis Testing
- Inferential Statistics
- Probability Distributions
Learners are introduced to Descriptive Statistics in this topic, a method of studying data analysis that entails summarising and elucidating diverse data sets.
Here, learners are introduced to Probability, which is used for studying randomness, for example, the possibility of an event occurring in a random/sample experiment. The probability of an event happening, given that many other events have also happened, is known as Conditional Probability.
Learners will gain knowledge of statistical hypothesis testing, which is an essential approach in carrying out experiments based on gathered data.
In this session, you will examine the basic ideas of inferential statistics and use Python to test hypotheses and estimate population parameters.
Probability Distributions, a function in statistics, are used to list all the possible values that a random variable can have within a specific range.
MACHINE LEARNING
The following module of this Data Scientist Masters Program introduces Machine Learning from scratch, along with the commonly employed classical ML algorithms in each category.
- Linear Regression
- Multiple Variable Linear Regression
- Logistic Regression
- Naive Bayes Classifiers
- Support Vector Machines
Linear Regression is one of the most widely used algorithms for predictive analysis in Machine Learning. It is a method that relies on a linear relationship between the independent and dependent variables.
A Supervised Machine Learning approach called Multiple Linear Regression involves many data variables used for data analysis. One dependent variable is predicted using a number of independent variables. You will be guided through all the principles of Multiple Linear Regression that are utilised in Machine Learning.
Like Linear Regression, one of the most widely used ML algorithms is Logistic Regression. It is a straightforward classification technique that can predict categorical dependent variables with the aid of independent factors. In this subject, you will be guided through all the principles of Logistic Regression that are utilised in Machine Learning.
Using Bayes Theorem, classification issues are solved using the Naive Bayes algorithm. In this session, you will learn about the theorem and how to use it to solve issues.
Another well-liked machine learning approach for classification and regression issues is the Support Vector Machine (SVM). Through this session, you will learn how to apply this algorithm.
- Decision Trees
- Random Forests
- Bagging
- Boosting
A supervised machine learning technique called a Decision Tree is applied to classification and regression problems. A Decision Tree is a hierarchical structure where internal nodes represent the dataset features, branches stand in for the decision rules, and each leaf node represents the outcome.
A well-liked supervised learning algorithm in Machine Learning is Random Forest. As the name suggests, it builds numerous decision trees on various subsets of the provided dataset and averages their predictions to improve predictive accuracy.
A machine learning meta-algorithm called “Bagging”, which is often referred to as bootstrap aggregation, is used to enhance the accuracy and stability of machine learning algorithms used in statistical classification and regression.
A machine learning meta-algorithm under the name of "Boosting" creates robust classifiers from a collection of weak classifiers. Gradient boosting and AdaBoost (adaptive boosting) are common variants of Boosting, which you will understand in this topic.
- K-means Clustering
- Hierarchical Clustering
- Dimension Reduction-PCA
A well-liked unsupervised learning approach for solving clustering issues in Machine Learning or Data Science is k-means clustering. You will study the algorithm's operation in this topic and then put it into practice.
An ML algorithm called hierarchical clustering creates a hierarchy or tree-like structure of clusters. You can use it, for instance, to cluster several unlabeled datasets into one group inside the hierarchical structure. You will learn how to use and implement this algorithm in this topic.
A method to lessen the complexity of a model, such as reducing the number of input variables for a predictive model to prevent overfitting, is Principal Component Analysis (PCA) for Dimension Reduction. In this topic, you will learn several essentials about the widely used Dimension Reduction-PCA approach in Machine Learning.
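A minimal scikit-learn sketch of the unsupervised techniques above: reducing dimensionality with PCA and then clustering with k-means (the bundled iris dataset is a placeholder):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)

# Dimension Reduction: project the 4-dimensional features onto 2 principal components
X_2d = PCA(n_components=2).fit_transform(X)

# K-means Clustering: partition the reduced data into 3 groups
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_2d)
print(kmeans.cluster_centers_)
print(kmeans.labels_[:10])
```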
- Feature Engineering
- Model Selection and Tuning
- Model Performance Measures
- Regularising Linear Models
- ML Pipeline
- Bootstrap Sampling
- Grid Search CV
- Randomized Search CV
- K fold Cross-Validation
Data is transformed through Feature Engineering from its raw condition to one suited for modelling. It transforms the data columns into elements that more clearly depict a particular circumstance. The component's ability to accurately represent an object impacts how well the model can forecast its behaviour. You will learn about several Feature Engineering essentials in this subject.
By comparing each candidate model against the requirements, this topic will show you how to select the one that best fits your problem.
You will learn how to use model assessment metrics to optimise the performance of your Machine Learning model in this session.
In this session, you will explore the process of avoiding overfitting and improving model interpretability.
In this subject, you will learn how to use the ML Pipeline to automate Machine Learning workflows. The ML Pipeline can be used by allowing a set of data to be modified and connected in a model that can be tested and evaluated to produce a positive or negative outcome.
By analysing a dataset with replacement, you will gain familiarity with the Machine Learning technique known as "Bootstrap Sampling" to estimate population statistics.
To find the optimal values for any Machine Learning model, hyperparameter tuning is done using the GridSearchCV method. The significance of hyperparameters has a substantial impact on a model's performance. Manually carrying out this procedure is a laborious task. To automate the tuning of hyperparameters, we use GridSearchCV.
Similar to GridSearchCV, RandomizedSearchCV automates the tuning of hyperparameters; instead of exhaustively evaluating every combination in the grid, it samples a fixed number of parameter settings at random.
In Machine Learning, the holdout approach can be improved with k-fold cross-validation. This approach ensures that the performance of our model is independent of the selection of the train and test sets. The holdout approach is applied to each subset of the data set k times, following the division of the data set into k subsets.
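A minimal scikit-learn sketch of hyperparameter tuning with GridSearchCV and 5-fold cross-validation, as described above (the parameter grid and dataset are illustrative):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold

X, y = load_breast_cancer(return_X_y=True)

# Candidate hyperparameter values to search over
param_grid = {"n_estimators": [100, 300], "max_depth": [3, 5, None]}

# 5-fold cross-validation: every sample is used for both training and validation
cv = KFold(n_splits=5, shuffle=True, random_state=0)

search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=cv, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```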
- Introduction to Recommendation Systems
- Popularity based model
- Content based Recommendation System
- Collaborative Filtering (User similarity & Item similarity)
- Hybrid Models
- Word Vectorizer
- TF-IDF
As their name implies, Recommendation Systems assist users by predicting their future preferences for certain products and presenting the most appropriate options. You will discover how to employ these methods to guide customers toward the finest products.
Here, you will gain familiarity with the Popularity-based Model, a type of recommendation system that is based on popularity or anything that is currently trending.
First, we explicitly or indirectly gather data from the user. Using this information, we later develop a user profile that will be used to make suggestions to the user. The user increases the system's accuracy by giving us more data or by acting on the recommendation more frequently. The term "Content-based Recommendation System" refers to this technique which we will go through in this topic.
In Collaborative Filtering, many methods are used to find users or products similar to one another in order to provide the best recommendations.
Multiple clustering and classification methods are combined to create a hybrid model. In this section, you will discover how to use a hybrid model.
Here, you will discover how to convert words into numerical vectors with the aid of a technique in Machine Learning called Word Vectorizer. Word Vectorizer allows for applying mathematical and statistical techniques to text data. The technique can be used to improve the performance of Machine Learning algorithms and reduce the dimensionality of the data.
The term frequency of a word in a document is referred to as TF. The simplest method of determining this frequency is to simply count the number of times a word appears in a document.
IDF is a word's inverse document frequency across a collection of documents. It conveys how common or rare a word is throughout the entire set of documents; the closer the IDF is to 0, the more common the word.
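A minimal scikit-learn sketch of turning text into TF-IDF feature vectors, following the TF and IDF definitions above (the example sentences are made up):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the product quality is great",
    "great price and fast delivery",
    "the delivery was late and the product broke",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)          # sparse matrix: documents x vocabulary
print(vectorizer.get_feature_names_out())       # learned vocabulary
print(tfidf.toarray().round(2))                 # TF-IDF weights per document
```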
ARTIFICIAL INTELLIGENCE
The following module of this Master of Data Science Program will teach us everything from the basics to moving beyond Machine Learning and into the world of Neural Networks. The next step is to train our models on unstructured data, such as text and images, rather than the usual tabular data.
- Overview of ChatGPT and OpenAI
- Timeline of NLP and Generative AI
- Frameworks for understanding ChatGPT and Generative AI
- Implications for work, business, and education
- Output modalities and limitations
- Business roles to leverage ChatGPT
- Prompt engineering for fine-tuning outputs
- Practical demonstration and bonus section on RLHF
- Mathematical Fundamentals for Generative AI
- VAEs: First Generative Neural Networks
- GANs: Photorealistic Image Generation
- Conditional GANs and Stable Diffusion: Control & Improvement in Image Generation
- Transformer Models: Generative AI for Natural Language
- ChatGPT: Conversational Generative AI
- Hands-on ChatGPT Prototype Creation
- Next Steps for Further Learning and Understanding
Dive into the development stack of ChatGPT by learning the mathematical fundamentals that underlie generative AI. Further, learn about transformer models and how they are used in generative AI for natural language.
- Introduction to Perceptron
- Neural Networks
- Activation and Loss functions
- Gradient Descent
- Batch Normalization
- TensorFlow & Keras for Neural Networks
- Hyper Parameter Tuning
Here, you will be introduced to Perceptron, an artificial neuron that is essentially a mathematical simulation of a biological neuron.
Here, you will discover all the uses of Neural Networks, a computing system based on the biological neural network that constitutes the human brain.
A neural network's output is determined by its Activation Function, which takes into account numerous inputs, and the Loss Function is a method for reducing neural network prediction errors. Here, you will explore the application of Activation and Loss Functions.
Gradient Descent is an iterative optimisation technique for finding a function's minimum; it is used to find the parameters or coefficients that minimise a cost function. It can occasionally fail to locate the global minimum and become stuck at a local minimum. In this session, you will discover all there is to know about Gradient Descent.
Here, you will explore the process of Normalisation that involves converting the values of the dataset's numeric columns to a standard scale without distorting the discrepancies between the ranges of values. In Deep Learning, Normalisation is carried out continuously throughout the network as opposed to just once at the beginning. This technique is called Batch Normalisation. A layer's activation function output is normalised before being supplied as input to the following layer.
Google developed TensorFlow, an open-source library for complex machine learning and numerical computation. A powerful open-source API for creating and analysing deep learning models is called Keras. This session will teach you how to execute TensorFlow and Keras from scratch. In Python, these libraries are frequently used for AI and ML.
This session will guide you through hyperparameter tuning, the process of choosing the hyperparameter values that give the best model performance.
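A minimal TensorFlow/Keras sketch tying the ideas above together: a small neural network with ReLU activations, batch normalisation, and gradient-descent-based training (the data is synthetic):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic binary-classification data (placeholder for a real dataset)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(32, activation="relu"),    # hidden layer with ReLU activation
    layers.BatchNormalization(),            # batch normalisation between layers
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # output layer for binary classification
])

# Adam is a gradient-descent variant; binary cross-entropy is the loss function
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))
```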
- Introduction to Convolutional Neural Networks
- Introduction to Images
- Convolution, Pooling, Padding & its Mechanisms
- Forward Propagation & Back propagation for CNNs
- CNN architectures like AlexNet, VGGNet, InceptionNet & ResNet
- Transfer Learning
- Object Detection
- YOLO, R-CNN, SSD
- Semantic Segmentation
- U-Net
- Face Recognition using Siamese Networks
- Instance Segmentation
In this session, you will learn all you need to know about Convolutional Neural Networks (CNN), which are utilised for a variety of tasks, including image segmentation, classification, and processing.
In this session, you will learn how to process an image and extract its data so that it may be used for deep learning image recognition.
In this session, you will discover several techniques in CNN, such as Convolution, Pooling and Padding, and learn about their mechanisms.
The term “Forward Propagation” refers to the process of passing input values through the layers of a neural network until they reach the output layer. The term “Back Propagation” refers to the process of adjusting the weights of the neurons in the hidden layers according to the error in the output layer. These two processes are the basic calculations that are performed to train a neural network.
Here, you will quickly go through the details of popular CNN architectures, such as AlexNet, VGGNet, InceptionNet and ResNet, and how they vary from each other.
Here, you will comprehend Transfer Learning, a research problem in Deep Learning concerned with the storage of knowledge obtained while training one model and its application to another model.
A software program may identify and track down things in an image or video using the Computer Vision approach known as Object Detection. One instance of Object Detection is Face Recognition. This subject teaches you how to use Deep Learning methods to detect any object.
In this subject, you will learn how to detect objects using various Deep Learning algorithms like YOLO, R-CNN and SSD.
In this subject, you will understand semantic segmentation, often referred to as dense prediction in Computer Vision, which aims to assign each pixel of the input image to the appropriate class that corresponds to a particular object or body.
In this subject, you will understand U-Net, a Deep Learning architecture that has been used for image segmentation tasks in Computer Vision. The architecture is based on the encoder-decoder architecture used in convolutional neural networks.
A Siamese Neural Network, also known as a Twin Neural Network, is an artificial neural network that consists of two or more subnetworks that are similar to one another in terms of configuration, parameters and weights. This session will assist you in recognising a person’s face in an image using Siamese Networks.
The goal of object instance segmentation, which advances semantic segmentation, is to differentiate many objects from a single class. Here, you will understand instance segmentation, which is regarded as a hybrid task that combines semantic segmentation and object detection.
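A minimal Keras sketch of the CNN building blocks described above (convolution, pooling, padding, and backpropagation during training), applied to the MNIST digit images bundled with Keras:

```python
from tensorflow import keras
from tensorflow.keras import layers

# MNIST: 28x28 grayscale digit images bundled with Keras
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, kernel_size=3, padding="same", activation="relu"),  # convolution with padding
    layers.MaxPooling2D(pool_size=2),                                     # pooling
    layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                               # one output per digit class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)  # backpropagation happens here
print(model.evaluate(x_test, y_test, verbose=0))
```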
- Introduction to NLP
- Stop Words
- Tokenization
- Stemming and Lemmatization
- Bag of Words Model
- POS Tagging
- Named Entity Recognition
- Introduction to Sequential data
- RNNs and its Mechanisms
- Vanishing & Exploding gradients in RNNs
- LSTMs - Long short-term memory
- GRUs - Gated Recurrent Unit
- LSTMs Applications
- Time Series Analysis
- LSTMs with Attention Mechanism
- Neural Machine Translation
- Advanced Language Models: Transformers, BERT, XLNet
By utilising computational linguistics, Natural Language Processing (NLP) creates real-world applications for languages with a variety of structural complexities. With appropriate, effective algorithms, we aim to educate the computer on how to learn languages and then expect it to understand them. The introduction to NLP and all the key ideas you need to understand are covered in this subject.
Stop words do not add anything to the meaning of the sentence and can be eliminated from the sentence without altering its meaning. These words are typically function words such as prepositions, articles and conjunctions. Here, you will discover how to remove stop words in order to reduce the size of the text data set, which in turn, makes processing faster and easier.
Tokenization is the process of breaking down a string of text into smaller pieces called tokens. Tokens can be words, numbers, punctuation marks, or other pieces of text. In this session, you will get familiar with the Tokenization technique used in NLP to split a text into meaningful units that can be analysed.
This session will cover two popular methods in NLP, Stemming and Lemmatization, which make text processing easier and faster by reducing the number of unique words in a text.
Bag of Words is a text modelling method used in Natural Language Processing. To put it technically, it is a method for feature extraction from text data. This method of extracting features from documents is simple and adaptable. In this session, you will learn to keep track of words, ignore grammatical subtleties, word order, etc.
In elementary school, we were taught the distinctions between the many parts of speech, such as nouns, verbs, adjectives, and adverbs. POS tagging or POS annotation is the process of assigning each word in a phrase to its appropriate POS (part of speech). Word classes, morphological classes, and lexical tags are other names for POS tags.
Named Entity Recognition, or NER for short, is a standard NLP problem that deals with information extraction. The principal objective is to find and arrange named entities in text into predetermined categories, including names of people, organisations, locations, and events, as well as expressions of time, quantity, monetary values, percentages, and other terms.
A sequence is an organised group of several elements, as the name would imply. This session will teach you how to use the NLP Sequential model to forecast which letter or word will occur.
Recurrent Neural Networks are artificial neural networks that utilise sequential or time-series data. They can be used for speech recognition, image captioning, language translation, and natural language processing. In this session, you will learn about RNNs and discover their various mechanisms.
This session will familiarise you with the vanishing and exploding gradient problems in RNNs.
Here, you will discover how to recognise order dependence in sequence prediction issues using an artificial recurrent neural network called LSTM.
In this session, you will learn about the gating mechanism, called GRU, in RNNs.
In this session, you will learn about all the critical applications of LSTM.
Methods for analysing time-series data to get meaningful statistics and other pertinent data are included in Time Series Analysis. This session will teach you how to forecast future values based on previously observed values with the aid of Time Series Forecasting.
In this session, you will understand how to emphasise the most relevant information when making predictions with the aid of the Attention Mechanism.
In this session, you will comprehend Neural Machine Translation (NMT), a task for machine translation that automatically translates source text from one language to another.
Other popular and sophisticated language models used in NLP will be covered in this session.
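A minimal NLTK sketch of the preprocessing steps described in this unit: tokenisation, stop-word removal, stemming, lemmatisation and POS tagging (the sample sentence is made up; resource names may vary slightly across NLTK versions):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads of the required NLTK resources
for pkg in ["punkt", "stopwords", "wordnet", "averaged_perceptron_tagger"]:
    nltk.download(pkg, quiet=True)

text = "The delivery drivers were running late but apologised politely."

tokens = nltk.word_tokenize(text)                       # tokenisation
stop_set = set(stopwords.words("english"))
content = [t for t in tokens if t.lower() not in stop_set and t.isalpha()]

stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
print([stemmer.stem(t) for t in content])               # stemming
print([lemmatizer.lemmatize(t) for t in content])       # lemmatisation
print(nltk.pos_tag(content))                            # POS tagging
```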
SELF-PACED MODULE: Introduction to Reinforcement Learning
This module of the Data Science Master’s Degree Program is a self-paced course where you will learn about Reinforcement Learning (RL) and its different components, types and examples. You'll then move on to GANs (Generative Adversarial Networks), where you'll learn about their applications and how to work with them.
- RL Framework
- Component of RL Framework
- Examples of RL Systems
- Types of RL Systems
- Q-learning
We need technology to make life easier, increase productivity, and make wiser business decisions, and we need sophisticated machines to accomplish these tasks. While writing programs for straightforward tasks is simple, we also need a way to build machines that can handle more complicated jobs. Such machines must be capable of learning on their own, and this is where the Reinforcement Learning (RL) Framework comes into the picture.
This session will cover various components of the RL framework.
This session will cover various examples of RL systems.
This session will cover different kinds of RL systems.
In Q-learning, the "Q" stands for quality. It is an off-policy RL algorithm that learns the value of each action in each state and constantly looks for the optimal course of action given the present situation.
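A minimal NumPy sketch of the tabular Q-learning update rule on a toy chain environment (states 0-4, actions left/right; the environment and hyperparameters are made up for illustration):

```python
import numpy as np

n_states, n_actions = 5, 2           # toy chain: move left (0) or right (1)
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    """Reach state 4 for a reward of 1; every other move gives no reward."""
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for _ in range(500):                 # training episodes
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.round(Q, 2))
```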
- Introduction to GANs
- Generative Networks
- Adversarial Networks
- How do GANs work?
- DCGANs - Deep Convolution GANs
- Applications of GANs
This topic covers everything related to the introduction of GANs (Generative Adversarial Networks), which are deep generative models.
Like the majority of generative models, GANs employ a differentiable function represented by a neural network, called the Generator Network.
Another neural network called the Adversarial Network is also a component of GANs, which you will go through in this topic.
In this topic, you will discover how GANs work in Deep Learning.
In this topic, an example will be used to demonstrate Deep Convolutional GANs (DCGANs), in which both the Generator and the Discriminator are convolutional neural networks.
You will discover all the necessary and practical applications of GANs in this topic.
PROGRAM CURRICULUM FOR MASTER OF DATA SCIENCE (GLOBAL)
ENGINEERING AI SOLUTIONS
- Explain the process and key characteristics of developing an AI solution, and the contrast with traditional software development, to inform a range of stakeholders
- Design, develop, deploy, and maintain AI solutions utilising modern tools, frameworks, and libraries
- Apply engineering principles and scientific method with appropriate rigour in conducting experiments as part of the AI solution development process
- Manage expectations and advise stakeholders on the process of operationalising AI solutions from concept inception to deployment and ongoing product maintenance and evolution
MATHEMATICS FOR ARTIFICIAL INTELLIGENCE
- Explain the role and application of mathematical concepts associated with artificial intelligence
- Identify and summarise mathematical concepts and techniques covered in the unit needed to solve mathematical problems from artificial intelligence applications
- Verify and critically evaluate results obtained and communicate results to a range of audiences
- Read and interpret mathematical notation and communicate the problem-solving approach used
MACHINE LEARNING
- Use Python for writing appropriate codes to solve a given problem
- Apply suitable clustering/dimensionality reduction techniques to perform unsupervised learning on unlabelled data in a real-world scenario
- Apply linear and logistic regression/classification and use model appraisal techniques to evaluate developed models
- Use the concepts of KNN (k-nearest neighbours) and SVM (support vector machines) to analyse and develop classification models for solving real-world problems
- Apply decision tree and random forest models to demonstrate multi-class classification models
- Implement model selection and compute relevant evaluation measures for a given problem
MODERN DATA SCIENCE
- Develop knowledge of and discuss new and emerging fields in data science
- Describe advanced constituents and underlying theoretical foundation of data science
- Evaluate modern data analytics and its implication in real-world applications
- Use appropriate platform to collect and process relatively large datasets
- Collect, model and conduct inferential as well as predictive tasks from data
REAL-WORLD ANALYTICS
- Apply knowledge of multivariate functions, data transformations and data distributions to summarise data sets
- Analyse datasets by interpreting summary statistics, model and function parameters
- Apply game theory, and linear programming skills and models, to make optimal decisions
- Develop software code to solve computational problems for real-world analytics
- Demonstrate professional ethics and responsibility when working with real-world data
DATA WRANGLING
- Undertake data wrangling tasks by using appropriate programming and scripting languages to extract, clean, consolidate, and store data of different data types from a range of data sources
- Research data discovery and extraction methods and tools and apply resulting learning to handle extracting data based on project needs
- Design, implement, and explain the data model needed to achieve project goals, and the processes that can be used to convert data from data sources to both technical and non-technical audiences
- Use both statistical and machine learning techniques to perform exploratory analysis on data extracted, and communicate results to technical and non-technical audiences
- Apply and reflect on techniques for maintaining data privacy and exercising ethics in data handling
Faculty

Dr. Kumar Muthuraman
Faculty Director, Centre for Research and Analytics, UT Austin


Mr. R Vivekanand
MBA (Monash University Melbourne Vic.), Operations Director, Wilson Consulting Private Limited


Raghavshyam Ramamurthy
MBA, Whitman School of Management, Industry Expert in Visualization


Dr. Sutharshan Rajasegarar
Senior Lecturer in Computer Science
Course Director Master of Data Science


Prof. Abhinanda Sarkar
Consultant Data Scientist, Compegence, B.Stat, M.Stat - Indian Statistical Institute, Ph.D in Statistics - Stanford University

Prof. Dan Mitchell
Clinical Assistant Professor, The University of Texas at Austin; Ph.D., University of Texas at Austin; MS in Mathematics, New York University


Dr. Ye Zhu
Senior Lecturer, Computer Science


Dr. Bahareh Nakisa
Lecturer, Applied Artificial Intelligence


Dr. Asef Nazari
Senior Lecturer in Mathematics for Artificial Intelligence


Gang Li
Associate Professor


Dr. Marek Gagolewski
Senior Lecturer, Applied Artificial Intelligence


Maia Angelova Turkedjieva
Professor, Real-World Analytics

Get the Great Learning Advantage
Our career support program will be made available to all the learners of this program.
- 50% Average Salary Hike

Resume Building Sessions
Build your resume to highlight your skill-set along with your previous academic and professional experience.

Interview preparation
Learn to crack technical interviews with our interview preparation sessions.

Career Guidance
Get access to career mentoring from industry experts. Benefit from their guidance on how to build a rewarding career.
Fees and Application Details
Master of Data Science (Global) Program
8,500 USD
Masters Degree from

- Get a Master of Data Science (Global) from Deakin University and a PG certificate from the University of Texas at Austin
- Learn from renowned faculty with live and interactive online lectures
- Become industry-ready with mentorship from experts
- Gain practical skills through project based learning
- Learn alongside a diverse batch of peers for a rich learning experience
- Build your skills with a curriculum designed by leading academicians & industry experts
- Get a global master's degree at 1/10th the cost of a 2-year on-campus master's
Application Process
APPLY
Fill out an online application form
GET REVIEWED
Go through a screening call with the Admission Director’s office.
JOIN THE PROGRAM
Your profile will be shared with the Program Director for final selection
Who is this program for?
Applicants must meet Deakin's minimum English Language requirement.
Candidates should have a bachelor's degree (minimum 3-year degree program) in a related discipline OR a bachelor's degree in any discipline with at least 2 years of work experience.
Upcoming Application Deadline
Our admissions close once the requisite number of participants enrol for the upcoming batch. Apply early to secure your seat.
Deadline: 20th Jun 2023
Apply Now
Batch Start Dates
Online
To be announced
Frequently Asked Questions
The Master of Data Science (Global) is a 24-month online program designed with a modular structure by unbundling the curriculum into foundational and advanced competency tracks, which enable learners to master advanced Data Science skill sets effectively.
The program is designed to equip students with the industry-relevant skills and knowledge required to pursue careers in the cutting-edge fields of Data Science and Business Analytics.
The online mode of learning will furthermore let the students continue working while upgrading their skills and saving up on accommodation costs. With globally-recognised credentials from leading universities (UT Austin & Deakin University), graduates of the Master of Data Science (Global) from Deakin University become prime candidates for accelerated career progression in the Data Science field.
Students will commence their journey with the PG Program in Data Science and Business Analytics (PGP-DSBA) from the McCombs School of Business at the University of Texas, Austin (UT Austin), in collaboration with Great Lakes Executive Learning for the first 12 months. Upon completing this program, they will continue their learning journey with Deakin University's 12-month online Master of Data Science (Global) Program.
Deakin University has positioned itself in the top 1% of universities worldwide, as per ShanghaiRankings. According to QS World Rankings, it is ranked as one of the top 50 young universities in the world.
The QS World University Rankings 2021 ranked UT Austin 6th globally in Business Analytics.
There are a lot of amazing benefits this course has got to offer you. Here are a few of them:
-
World-Class Faculty: Faculty from Deakin University, UT Austin and Great Lakes come with years of academic and industry experience to impart the latest skills effectively.
-
Hands-on Learning: Learners work on several hands-on projects and case studies to apply Data Science techniques to solve real-world business problems and a capstone project that incorporates all the tools and techniques learned throughout the course.
-
Comprehensive Curriculum: The curriculum is designed with a modular structure by unbundling the curriculum into foundational and advanced competency tracks, which enable learners to master advanced Data Science skill sets effectively.
-
Dual Advantage from World's Leading Universities: Upon the successful completion of the Master of Data Science (Global) Program, you will receive a PG Certificate from the McCombs School of Business at UT Austin and Great Lakes and a Master of Data Science (Global) Degree from Deakin University, which adds value to your resume.
The learning outcomes of this world-class program are as follows:
-
Students will develop a thorough understanding of how to manage the difficulties that can occur when developing Data Science solutions.
-
They will be able to apply mathematical and statistical principles to Data Science applications.
-
They will explore a variety of Data Science techniques and apply them to real-world scenarios.
-
They will be able to comprehend essential Data Science skills utilised in today’s modern world.
-
Students will work with decision-making problems by utilising real-world analytical techniques, along with the technique of Data Wrangling.
The renowned and highly experienced faculty members of UT Austin, Great Lakes and Deakin University will teach you this program and guide you through your lucrative career path in Data Science & Business Analytics.
Yes, students will get a dual advantage from the world’s leading universities and institutes after completing this program. The details are provided below:
PGP-DSBA and Master of Data Science: Suppose a student pursues the PGP-DSBA and Master of Data Science courses. In that case, they will secure Post Graduate Certificates in Data Science and Business Analytics from the University of Texas at Austin and Great Lakes Executive Learning, as well as the Master of Data Science Degree (Global) from Deakin University, Australia.
Yes, you will receive career assistance from Great Learning, a part of BYJU’s group and India’s renowned ed-tech platform for professional development and higher education.
The career support services include:
-
E-Portfolio: The program will help students develop an outstanding E-Portfolio to showcase their expertise to potential employers.
-
Exclusive Job Board: Students will obtain access to Great Learning’s Job Board, where 12000+ organisations approach them with industry-relevant job opportunities with an average salary hike of 50%.
-
Resume Building and Interview Preparation: The program will assist students in building their top-notch resumes to highlight their skills and previous professional experience. They’ll also be able to crack interviews with the interview preparation sessions.
-
Career Guidance: Students will receive career mentorship sessions from several industry experts to build their rewarding careers.
The duration of the program is 24 months, and the student’s results will be shared after 3 months of program completion.
This program follows a cohort-based approach. So, students are required to finish these courses in a specified order and time period.
The eligibility criteria for this program are as follows:
-
Applicants must hold a bachelor's degree (minimum 3-year program) in a related field or a bachelor's degree in any discipline with at least 2 years of professional work experience.
-
The applicants must meet Deakin University's minimum English language requirement.
No, you are not required to take the GRE or GMAT. Candidates who meet the eligibility criteria are eligible to pursue this program.
The fee to pursue this program is USD 8500, which is 1/10th the cost as compared to a traditional 2-year Master’s degree.
The program fee can be paid by candidates through net banking, credit cards, or debit cards.
There are no refund policies for this program. However, a few exceptional cases are considered at our discretion.
To enrol in this program, the candidates must meet the eligibility criteria mentioned earlier. The admission process for the eligible students is as follows:
-
Step-1: Register through an online application form.
-
Step-2: The admissions committee will analyse each applicant's profile, and those chosen will get an "Offer of Admission."
-
Step-3: By paying the registration fee for the upcoming cohort and submitting all required documents, you can reserve your seat.
Once the required number of participants has signed up for the upcoming batch, our admissions are closed. A first-come, first-served policy applies to the few seats available for this program. To guarantee your seat, apply early.
Still have queries?
Contact Us
Please fill in the form and a Program Advisor will reach out to you. You can also reach out to us at deakin.mds@mygreatlearning.com or +1 512 890 1269.
Download Brochure
Check out the program and fee details in our brochure
Master of Data Science (Global) Program - Deakin University, Australia
Deakin University in Australia offers its students a top-notch education, outstanding employment opportunities, and a superb university experience. Deakin is ranked among the top 1% of universities globally (Shanghai Rankings) and is one of the top 50 young universities worldwide.
The curriculum designed by Deakin University strongly emphasises practical, project-based learning shaped by industry demands, ensuring that its degrees remain relevant today and in the future.
The University provides a vibrant atmosphere for teaching, learning, and research. To ensure that its students are trained and prepared for the occupations of tomorrow, Deakin has invested in the newest technology, cutting-edge instructional resources, and facilities. All students, whether enrolled on campus or studying entirely online, gain access to Deakin's online learning environment.
Role of UT Austin and Great Lakes in Deakin University's Master of Data Science
The University of Texas at Austin’s (UT Austin) McCombs School of Business and Great Lakes Executive Learning have collaborated to develop the Post Graduate Program in Data Science and Business Analytics (PGP-DSBA).
Students take this course in the 1st year of the Data Science Master’s Degree (Global) Program. It familiarises them with the cutting-edge fields necessary for providing insights, supporting decisions, deriving business insights and gaining a competitive advantage in the modern business world.
In the 2nd year, students will continue their learning journey with Deakin University’s 12-month online Master of Data Science (Global) Program, where they will gain exposure and insights into advanced Data Science skills to prepare for the jobs of tomorrow.
Why pursue the Data Science Master's Program at Deakin University, Australia?
The Master's Degree from Deakin University offers a flexible learning schedule that lets working professionals meet their upskilling goals. Flexibility is not the sole benefit; you can also:
-
Obtain the recognition of a Global Master's Degree from an internationally-recognised university, Deakin University, along with Post Graduate Certificates from well-established institutions, UT Austin and Great Lakes.
-
Learn Data Science and Business Analytics from reputed faculty through live and interactive online sessions.
-
Develop your skills by using a curriculum created by eminent academicians and industry professionals.
-
Grasp practical knowledge and skills by engaging in project-based learning.
-
Become market-ready with mentorship sessions from industry experts.
-
Learn with a diverse group of peers and professionals for a rich learning experience.
-
Secure a Global Data Science Master’s Degree at 1/10th the cost compared to a 2-year traditional Master’s program.
Benefits of Deakin University's Master of Data Science (Global) Course
This online Data Science degree program offers several benefits, which include:
-
PROGRAM STRUCTURE
Deakin University's 24-month Master of Data Science (Global) Program is built with a modular structure that separates the curriculum into foundational and advanced competency tracks, allowing students to master advanced Data Science and Business Analytics skill sets.
-
INDUSTRY EXPOSURE
Through industry workshops and competency classes led by professionals and faculty at Deakin University, UT Austin and Great Lakes, candidates gain exposure and insights from world-class industry experts.
-
WORLD-CLASS FACULTY
To effectively teach the latest in-demand skills, the faculty members of Deakin University, UT Austin and Great Lakes bring years of expertise in both academia and industry.
-
CAREER ENHANCEMENT SUPPORT
The program provides career development support activities to assist applicants in identifying their strengths and career paths to help them pick the appropriate competencies to study. These activities include workshops and mentorship sessions. Students will also have access to GL Excelerate, a carefully curated employment platform from Great Learning, via which they can apply for appropriate opportunities.
The Advantage of Great Learning in this Deakin University’s Master of Data Science Course
Great Learning is India’s leading and reputed ed-tech platform and a part of BYJU’s group, providing industry-relevant programs for professional learning and higher education. This course gives you access to GL Excelerate, Great Learning's comprehensive network of industry experts and committed career support.
-
E-PORTFOLIO
An e-portfolio showcases the skills acquired and knowledge gained throughout the course and can be shared on social media, allowing potential employers to recognise your capabilities.
-
PLACEMENT PROCESS
The course will assist students with a dedicated placement process with reputable employers and domestic/MNC organisations.
-
ACCESS TO CURATED JOB BOARD
Students will gain access to a curated list of job opportunities that match their qualifications and industry. Great Learning has partnered with 12,000+ organisations through the job board, which share job opportunities with an average salary hike of 50%.
-
RESUME BUILDING AND INTERVIEW PREPARATION
The program assists students in creating resumes that highlight their abilities and prior work experience. Interview preparation workshops also teach them how to ace interviews.
-
CAREER MENTORSHIP
Acquire access to personalised career mentorship sessions from highly skilled industry experts and take advantage of their guidance to develop a lucrative career in the respective industry.
Eligibility for Master of Data Science (Global) Program
The following are the requirements for program eligibility:
-
Interested candidates must hold a bachelor's degree (minimum 3-year program) in a related field or a bachelor's degree in any discipline with at least 2 years of professional work experience.
-
Candidates must meet Deakin University’s minimum English language requirements.
Secure a Master of Data Science (Global) Degree, along with PGP-DSBA Certificates
Students will be awarded three certificates from the world’s leading and reputed institutes:
PGP-DSBA and Master of Data Science (Global)
-
Post Graduate Certificate in Data Science and Business Analytics - The University of Texas at Austin
-
Post Graduate Certificate in Data Science and Business Analytics - Great Lakes Executive Learning
-
Master of Data Science Degree (Global) from Deakin University, Australia