Venkatesh Tadinada has 25+ years of experience in various domains and verticals that deal with data. His journey started with Data Warehouses, proceeding on to Data Mining, Business Intelligence and now Machine Learning, Deep Learning & AI. He co-founded, and later exited, a start-up in Business Intelligence.
He drives his passion for teaching through workshops on Machine Learning. Shown below are a few of the recent hands-on workshops where he taught data enthusiasts the joy of discovering data.
This session introduces what machine learning is and where it can be applied. It also provides an overview of the various types of machine learning, suggests where to start when establishing a career in machine learning, and walks through the process to follow when making predictions on a machine learning problem.
This session introduces what machine learning is and where it can be applied, and provides hands-on experience with Python packages such as NumPy, pandas, Matplotlib and Seaborn, which are essential for data analysis and machine learning in Python.
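As a taste of what the hands-on portion covers, here is a minimal sketch of exploring a small dataset with NumPy and pandas (the columns and values are invented purely for illustration):

```python
import numpy as np
import pandas as pd

# A small synthetic dataset (hypothetical columns, for illustration only)
df = pd.DataFrame({
    "age": [22, 38, 26, 35, 28],
    "fare": [7.25, 71.28, 7.92, 53.10, 8.05],
})

# Basic exploration: summary statistics for every numeric column
print(df.describe())

# Derive a new column with NumPy: log1p compresses the skewed fare scale
df["log_fare"] = np.log1p(df["fare"])
print(df.head())
```

Matplotlib and Seaborn would then be used to plot these columns, e.g. histograms and scatter plots of `age` against `fare`.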
Logistic Regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. The outcome is measured with a dichotomous variable (one with only two possible outcomes). You will use Logistic Regression to determine whether passengers on the Titanic survived or died based on their attributes.
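The approach can be sketched with scikit-learn's `LogisticRegression` on a tiny invented "Titanic-like" dataset (the feature values and labels below are made up for illustration; the workshop works with the real Titanic data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical mini-dataset: features = [passenger class, age],
# target = survived (1) or did not survive (0)
X = np.array([[1, 29], [3, 25], [2, 40], [3, 8], [1, 58], [2, 21]])
y = np.array([1, 0, 0, 1, 1, 0])

# Fit the model, then predict the dichotomous outcome for a new passenger
model = LogisticRegression().fit(X, y)
print(model.predict([[1, 30]]))        # predicted class (0 or 1)
print(model.predict_proba([[1, 30]]))  # probability of each class
```

The same `fit` / `predict` pattern carries over directly once the real passenger attributes are loaded and cleaned.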
Linear Regression is a standard statistical data analysis technique used to determine the extent to which there is a linear relationship between a dependent variable and one or more independent variables. Random Forests are an ensemble learning method for classification and regression. You will get to solve a problem from HackerEarth's Brain Waves competition using these two algorithms.
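The two algorithms can be sketched side by side with scikit-learn on synthetic data (the data below is randomly generated for illustration, not the Brain Waves dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data with a known linear relationship: y ≈ 3x + noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X.ravel() + rng.normal(0, 1, size=100)

# Linear Regression recovers the slope of the linear relationship
lin = LinearRegression().fit(X, y)
print("estimated slope:", lin.coef_[0])

# A Random Forest (ensemble of decision trees) fits the same data
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print("forest prediction at x=5:", rf.predict([[5.0]])[0])
```

On a real competition dataset the same two estimators are simply fed the engineered feature matrix instead of this toy `X`.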
The XGBoost algorithm has become the ultimate weapon of many data scientists. It is a highly sophisticated algorithm, powerful enough to deal with all sorts of irregularities in data, and is capable of performing parallel computation. Participants will get hands-on experience with XGBoost, feature engineering for XGBoost, and optimization of its hyperparameters.
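The hyperparameter-tuning workflow can be sketched as follows. To keep the example self-contained it uses scikit-learn's `GradientBoostingClassifier` as a stand-in for gradient boosting; with the `xgboost` package installed you would swap in `xgboost.XGBClassifier`, and the tuning pattern is identical:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic classification data for illustration
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Cross-validated grid search over two common boosting hyperparameters
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    {"learning_rate": [0.05, 0.1], "max_depth": [2, 3]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

The same grid (plus XGBoost-specific parameters such as `n_estimators` and `subsample`) is what participants tune during the workshop.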
Most of the data in today's world is generated as we speak, tweet, and send messages on social media, among other activities. Working with this data has become necessary to derive useful patterns that help serve customers better and drive business growth. Participants will get hands-on experience with natural language processing tools, along with an end-to-end walk-through of classifying Twitter comments using NLTK.
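A minimal sketch of the classification step with NLTK's Naive Bayes classifier, using a handful of invented, hand-labelled "tweets" (real workshops would start from scraped Twitter data and proper tokenization):

```python
from nltk.classify import NaiveBayesClassifier

# Tiny hand-labelled dataset (hypothetical text, for illustration only)
train = [
    ("great product love it", "pos"),
    ("awesome service very happy", "pos"),
    ("terrible experience hate it", "neg"),
    ("worst support ever angry", "neg"),
]

def features(text):
    # Bag-of-words features: each token that is present maps to True
    return {word: True for word in text.split()}

train_set = [(features(text), label) for text, label in train]
clf = NaiveBayesClassifier.train(train_set)

print(clf.classify(features("love the service")))  # likely "pos"
```

In practice the feature extractor would also lowercase, strip punctuation, and remove stop words using NLTK's tokenizers and corpora.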
We predict the future from present observations, typically giving equal weight to all of them. When dealing with time-dependent data, and especially when there are seasonal trends, it is necessary to weight some observations more heavily than others. This workshop therefore covers the techniques needed to handle time series data.
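The idea of weighting recent observations more heavily can be sketched with pandas' exponentially weighted mean (the monthly values below are invented for illustration):

```python
import pandas as pd

# Hypothetical monthly observations showing an upward trend
series = pd.Series([10, 12, 13, 15, 16, 18, 21, 23])

# An exponentially weighted mean gives recent points more weight than
# older ones, unlike a plain mean that weights all observations equally
ewma = series.ewm(alpha=0.5).mean()

# The last smoothed value serves as a simple exponential-smoothing forecast
naive_forecast = ewma.iloc[-1]
print(naive_forecast)
```

Because the series trends upward, this forecast sits above the plain average, which lags behind the recent observations; full time series treatments (seasonal decomposition, ARIMA) build on this weighting idea.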
A high-level overview of what machine learning is, and how to recognize the problems that can be solved with Machine Learning.
Select the right technique to solve the problem (is it a classification problem? a regression? does it need preprocessing?).
Working knowledge of a few libraries like NumPy, Pandas and Scikit-Learn to start your learning experience, which you can then build upon.
A certificate for successful completion of the course.
Finally, you will be able to decide whether Machine Learning is for you and what you would have to do next if you want to make it your career.