FREE Machine Learning Master Class (Python): 30-Day Hackathon with Full Hands-on Builds. What's Next!
The objective of this machine learning master class is to help participants gain a working knowledge of the concepts covered so they can apply them to studying, teaching, research, and upskilling. 📚
What Will You Learn?
✅Day-1: Overview of A.I. | Machine Learning
✅Day-2: Introduction to Python | How to write code in Google Colab, Jupyter Notebook, PyCharm & IDLE
SUPERVISED LEARNING – CLASSIFICATION & REGRESSION
✅Day-3: Advertisement Sale prediction for an existing customer using LOGISTIC REGRESSION
✅Day-4: Salary Estimation using K-NEAREST NEIGHBOR
✅Day-5: Character Recognition using SUPPORT VECTOR MACHINE
✅Day-6: Titanic Survival Prediction using NAIVE BAYES
✅Day-7: Leaf Detection using DECISION TREE
✅Day-8: Handwritten digit recognition using RANDOM FOREST
✅Day-9: Evaluating Classification Model Performance using CONFUSION MATRIX, CAP CURVE ANALYSIS & ACCURACY PARADOX
✅Day-10: Classification Model Selection for Breast Cancer classification
No Attachments for Day 10
✅Day-11: House Price Prediction using LINEAR REGRESSION (Single Variable)
✅Day-12: Exam Mark Prediction using LINEAR REGRESSION (Multiple Variables)
No Attachments for Day 12
✅Day-13: Predicting the Previous Salary of a New Employee using POLYNOMIAL REGRESSION
✅Day-14: Stock price prediction using SUPPORT VECTOR REGRESSION
✅Day-15: Height Prediction from the Age using DECISION TREE REGRESSION
✅Day-16: Car price prediction using RANDOM FOREST
✅Day-17: Evaluating Regression Model Performance using R-SQUARED INTUITION & ADJUSTED R-SQUARED INTUITION
✅Day-18: Regression Model Selection for Engine Energy prediction.
UNSUPERVISED LEARNING – CLUSTERING
✅Day-19: Identifying Customer Spending Patterns using K-MEANS CLUSTERING
✅Day-20: Customer Spending analysis using HIERARCHICAL CLUSTERING
✅Day-21: Leaf Types Data Visualization using PRINCIPAL COMPONENT ANALYSIS
✅Day-22: Finding Similar Movies Based on Ranking using SINGULAR VALUE DECOMPOSITION
UNSUPERVISED LEARNING – ASSOCIATION
✅Day-23: Market Basket Analysis using APRIORI
✅Day-24: Market Basket Optimization/Analysis using ECLAT
REINFORCEMENT LEARNING
✅Day-25: Web Ads Click-Through Rate Optimization using UPPER CONFIDENCE BOUND
Natural Language Processing
✅Day-26: Sentiment Analysis using Natural Language Processing
✅Day-27: Breast Cancer Tumor Prediction using XGBOOST
DEEP LEARNING
✅Day-28: Pima-Indians Diabetes Classification
✅Day-29: Covid-19 Detection using CNN
✅Day-30: A.I Snake Game using REINFORCEMENT LEARNING
Learn Faster & Easier than You Think 🚀 🔥
DOWNLOAD YOUR DAY 1 LECTURE NOTES
Send download link to:
Day 15 Hackathon
print(model.predict([[99]])[0])
Result: 11.0
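For context, here is a minimal, self-contained sketch of the Day 15 setup (height prediction from age with decision tree regression). The tiny age/height arrays are made-up stand-ins for the course dataset, so the predicted value will differ from the hackathon result above:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Made-up training data: age in years -> height in cm
ages = np.array([[5], [10], [15], [20], [25], [30]])
heights = np.array([110.0, 138.0, 165.0, 175.0, 176.0, 176.5])

model = DecisionTreeRegressor(random_state=0)
model.fit(ages, heights)

# Same call shape as the hackathon snippet above
prediction = model.predict([[22]])[0]
print(prediction)
```

A fully grown tree on this toy data memorizes each sample, so the prediction is simply the height of the training leaf the query age falls into.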
DAY 12 HACKATHON
a = [[120, 10000, 0, 7, 6, 733, 0]]
Predictedmodel = model.predict(a)
print(Predictedmodel)
OUTPUT: [156869.35117057]

datasample = [[160, 3951, 1, 6, 5, 621, 0]]
modelresult = model.predict(datasample)
print(modelresult)
Output: [164500.65369712]

a = [[14, 8000, 0, 6, 8, 1404, 0, 0, 1, 0, 0, 0]]
PredictedmodelResult = model.predict(a)
print(PredictedmodelResult)
OUTPUT: [252799.81347602]

Input:
b = [[60, 8450, 0, 7, 5, 856, 0]]
PredictedmodelResult = model.predict(b)
print(PredictedmodelResult)
Output: [204111.05057904]
Day 12 Practice
datasample = [[30, 6000, 1, 7, 8, 1150, 5]]
modelresult = model.predict(datasample)
print(modelresult)
Output: [221291.70673548]
import pandas as pd  # load the dataset
import numpy as np  # array operations
from matplotlib import pyplot
from sklearn.linear_model import LinearRegression

from google.colab import files
uploaded = files.upload()

dataset = pd.read_csv('practiceDataset.csv')
print(dataset.shape)
print(dataset.head(5))

# Encode Street: Pave -> 0, Grvl -> 1
dataset['Street'] = dataset['Street'].map({'Pave': 0, 'Grvl': 1}).astype(int)
print(dataset.head())

# One-hot encode Heating into six 0/1 columns
dataset['Heating1'] = dataset['Heating'].map({'GasA': 1, 'GasW': 0, 'Wall': 0, 'Grav': 0, 'OthW': 0, 'Floor': 0}).astype(int)
dataset['Heating2'] = dataset['Heating'].map({'GasA': 0, 'GasW': 1, 'Wall': 0, 'Grav': 0, 'OthW': 0, 'Floor': 0}).astype(int)
dataset['Heating3'] = dataset['Heating'].map({'GasA': 0, 'GasW': 0, 'Wall': 1, 'Grav': 0, 'OthW': 0, 'Floor': 0}).astype(int)
dataset['Heating4'] = dataset['Heating'].map({'GasA': 0, 'GasW': 0, 'Wall': 0, 'Grav': 1, 'OthW': 0, 'Floor': 0}).astype(int)
dataset['Heating5'] = dataset['Heating'].map({'GasA': 0, 'GasW': 0, 'Wall': 0, 'Grav': 0, 'OthW': 1, 'Floor': 0}).astype(int)
dataset['Heating6'] = dataset['Heating'].map({'GasA': 0, 'GasW': 0, 'Wall': 0, 'Grav': 0, 'OthW': 0, 'Floor': 1}).astype(int)
print(dataset.head())

# Select feature columns by position, then the target column
X = dataset.iloc[:, [1, 4, 5, 17, 18, 38, -6, -5, -4, -3, -2, -1]].values
print(X.shape)
X[1]

Y = dataset.iloc[:, -7].values
Y

model = LinearRegression()
model.fit(X, Y)

# Data format: MSSubClass, LotArea, Street, OverallQual, OverallCond, TotalBsmtSF,
# Heating GasA, Heating GasW, Heating Wall, Heating Grav, Heating OthW, Heating Floor
a = [[20, 9600, 0, 6, 8, 1262, 1, 0, 0, 0, 0, 0]]
PredictedmodelResult = model.predict(a)
print(PredictedmodelResult)

Output: [190130.66855158]
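The six `map` calls that one-hot encode `Heating` above can be written in a single step with `pd.get_dummies`. A minimal sketch on a toy frame (the category values are the real `Heating` categories; the frame itself is made up for illustration):

```python
import pandas as pd

# Toy frame standing in for the housing dataset used above
df = pd.DataFrame({"Heating": ["GasA", "GasW", "Wall", "GasA"]})

# One call produces one 0/1 column per category,
# playing the role of Heating1..Heating6 above
dummies = pd.get_dummies(df["Heating"], prefix="Heating").astype(int)
df = pd.concat([df, dummies], axis=1)
print(df)
```

This avoids repeating the full category dictionary once per column and automatically picks up every category present in the data.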
I tried to download the Day 1 lecture notes. It asks for my email; I tried two of my email IDs but did not receive any mail. Is the material you are sharing just fake?
Please check your SPAM folder in your mail, sir.
Hello boss Sanjay,
Very nice class, boss. But I request you to share the notes and study material in the WhatsApp and Telegram groups. Your dedication and zeal are marvelous.
Hope you get my point. By doing this you will be donating your knowledge to poor people who cannot afford it.
A lot more to go and teach.
With regards,
Radha Krishna Chilukalapalli
Good morning, Sanjay sir 😃
Mind-blowing class, but Sanjay sir, your pace was very fast and quite a few points were skipped. Still, I am so excited for your next class… Python 😍😍
Sanjay sir, where will we get the class notes?
Learning something new with this ML master class
LD: 0.986047 (0.023716)
KNN: 0.964839 (0.021656)
CART: 0.932060 (0.031854)
NB: 0.943798 (0.031560)
SVM: 0.979014 (0.021946)
LDA: 0.953101 (0.017949)
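Results like the ones posted above typically come from a k-fold cross-validation loop over several classifiers. A minimal sketch, assuming scikit-learn's built-in breast cancer dataset (the course used a CSV, so exact numbers will vary):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

models = [
    ("LR", LogisticRegression(max_iter=10000)),
    ("LDA", LinearDiscriminantAnalysis()),
    ("KNN", KNeighborsClassifier()),
    ("CART", DecisionTreeClassifier()),
    ("NB", GaussianNB()),
    ("SVM", SVC(gamma="scale")),
]

means = {}
for name, model in models:
    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
    scores = cross_val_score(model, X, y, cv=kfold, scoring="accuracy")
    means[name] = scores.mean()
    # Same "NAME: mean (std)" format as the results posted here
    print(f"{name}: {scores.mean():.6f} ({scores.std():.6f})")
```

Differences in the posted numbers come from different random seeds, fold counts, and model hyperparameters.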
i tried it sanjay , thanks for teaching us
LR: 0.985000 (0.018371)
LDA: 0.952310 (0.019946)
KNN: 0.952278 (0.009290)
CART: 0.917089 (0.006093)
NB: 0.942278 (0.023063)
SVM: 0.974968 (0.022343)
Thank you Sanjay!
LR: 0.955592 (0.030253)
LDA: 0.953212 (0.034535)
KNN: 0.929734 (0.040287)
CART: 0.910853 (0.031071)
NB: 0.932115 (0.035155)
SVM: 0.610410 (0.007053)
Always great to interact with these techie legends. Education and sharing knowledge! that’s what makes us humans. Doing great job to encourage upcoming generations. I can imagine a smile if a person gains some knowledge through your concepts. God bless and be happy and wealthy.
LR: 0.953816 (0.020818)
LDA: 0.953816 (0.020818)
KNN: 0.953816 (0.020818)
CART: 0.953816 (0.020818)
NB: 0.953816 (0.020818)
SVM: 0.953816 (0.020818)
LR : 94.48076923076924% mean , (2.906957635672559%) std
CART : 92.23717948717947% mean , (5.290885225501307%) std
KNN : 92.21794871794872% mean , (3.4258847602528215%) std
NB : 93.97435897435898% mean , (3.3839840125954543%) std
SVM : 62.82051282051283% mean , (0.6410256410256432%) std
LDA : 94.9871794871795% mean , (2.958289926619907%) std
LR: 0.947285 (0.027702)
LDA: 0.953130 (0.029324)
KNN: 0.927677 (0.021649)
CART: 0.915950 (0.043777)
NB: 0.931523 (0.033237)
SVM: 0.626961 (0.004410)
Mean and std:
LR: 0.947343 (0.032476)
LDA: 0.956087 (0.013680)
KNN: 0.927488 (0.038094)
CART: 0.927488 (0.024244)
NB: 0.940773 (0.029156)
SVM: 0.637440 (0.007005)
LR: accuracy 0.94845, std 0.03586
LDA: accuracy 0.95775, std 0.03096
DT: accuracy 0.95089, std 0.02186
KNN: accuracy 0.91573, std 0.04917
GNB: accuracy 0.93909, std 0.03623
RFC: accuracy 0.95548, std 0.02198
SVM: accuracy 0.92254, std 0.04443
Thank you so much for your intense work, your sessions are really awesome. sanjay and jeevarajan sir are very interactive.
This is so great I think many people will benefit learning from you. sanjay and jeevarajan sir are very interactive. I hope you reach your stipulated goals sooner and Thank you very much for your intense work.
LR: 0.950831 (0.036608)
LDR: 0.957863 (0.020150)
DTC: 0.927187 (0.032254)
KNN: 0.927187 (0.046122)
GNB: 0.939037 (0.033207)
SVM: 0.901495 (0.032657)
LR: 0.975942 (0.018042)
LDA: 0.953816 (0.020818)
KNN: 0.962754 (0.027579)
CART: 0.951739 (0.033565)
NB: 0.932029 (0.031328)
SVM: 0.969324 (0.029562)
For yesterday's assignment using the breast cancer data, the outputs of the machine learning algorithms for the given data are:
LR: 0.981285 (0.025173)
LDA: 0.957863 (0.020150)
KNN: 0.964839 (0.018995)
NB: 0.941417 (0.027918)
CART: 0.915615 (0.045433)
SVM: 0.979014 (0.021946)
I'm very thankful to you, Sanjay sir. You are doing a great job. Your teaching style is so friendly, and thanks to it I have now gained confidence in Python programming as well as machine learning. Thank you so much!!
LR: 94.844961% mean, (3.586181)% std
LDA: 95.775194% mean, (3.095733)% std
KNN: 91.572536% mean, (4.917043)% std
CART: 95.310078% mean, (2.086987)% std
NB: 93.909192% mean, (3.623226)% std
SVM: 92.253599% mean, (4.443324)% std
In the SVM model, I used the gamma parameter 'scale' instead of 'auto', because with 'auto' the accuracy was too low.
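The gamma observation above can be checked directly. A minimal sketch, assuming scikit-learn's built-in breast cancer dataset (the course used a CSV, so the exact numbers may differ): on unscaled features, `gamma='scale'` normalizes by the feature variance while `gamma='auto'` does not, which is why 'auto' collapses to a much lower accuracy here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

results = {}
for gamma in ("scale", "auto"):
    scores = cross_val_score(SVC(gamma=gamma), X, y, cv=10)
    results[gamma] = scores.mean()
    print(f"gamma={gamma}: {scores.mean():.4f} ({scores.std():.4f})")
```

Scaling the features first (e.g. with `StandardScaler`) would shrink the gap, since both settings then see unit-variance inputs.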
You are covering really interesting topics, but while listening, the audio echoes and is not clear.
The way of introducing the concepts, and the summaries in the form of mind maps, are excellent.