Let's get started with Python

I have always wanted to master a programming language (just as a hobby). Programming can make your life easier by automating all kinds of tasks. I learned C in college, but only to pass the exam -- which I did. Then I never looked back at it.
During my professional life I have used R, so I have excellent knowledge of R (I never felt the need to learn another language). But R is mostly for data analysis and data science (which is what I do). Some time ago, while doing some research on data science, I found that R is not enough: if I wanted to be a bad-ass data scientist, I had to learn Python.
Learning Python is trickier than R, as there are so many things to learn and so many branches, unlike R (learn the concept of the data frame and dive into it). I want to learn Python both for data science and as a hobby, which made my learning even trickier. After long research and personal experience, here are the steps and resources that worked best for me.

Steps to understand Python from the basics:
1. Try Python -- basic -- take this course first.
2. Programming Foundations with Python -- basic -- the best course for starters who want to understand programming from the basics but lack knowledge about loops and data types.
3. Python -- Codecademy -- basic to advanced -- beautifully designed; you will learn by doing.

After taking these courses you can start making projects as you like. If you want to dive deep into data science:
1. How to get better at data science -- the best blog on this topic.

Need books?
1. IT ebooks -- free download -- buy books from Amazon if you can afford it. Support the authors.

Free online courses (just search for Python on these websites)
1. Udacity -- the most interactive and interesting courses -- first-time programmers should try the introductory course -- you can progress at your own pace.
2. edX -- interactive but can be boring, with a fixed schedule (you can archive a course and take it later too) -- has courses from basic to advanced.
3. Coursera -- first-time programmers shouldn't try this -- most of the courses are boring if you don't have enthusiasm.
* This is my personal view; maybe Coursera has very interesting and interactive courses and I always took the boring classes, or Udacity may have boring classes too. Check all three websites and decide for yourself.

Installing Python and packages can be tricky too.
My suggestion is to use the Anaconda distribution:
  -- It comes with 200+ popular packages installed, along with Jupyter Notebook (IPython notebook) and Spyder (an IDE for Python).
  -- Another advantage: you can install both 2.7 and 3.4, each with a single command.
       conda create -n python2 python=2.7 anaconda

       Activate the environment
       activate python2
 -- You can also install R and use it in a Jupyter notebook with a single line of code

       conda create -n my-r-env -c r r-essentials

Need help?
1. Google -- ha ha

Remember: if you have to do the same task again and again, always find a way to automate it.
Happy coding :-)

Good coding format and practices in R

There are many recommended coding standards and layouts. Badly written code is a big pain for any reader, so it is always better to have a good coding format and follow a few standards. My favorite layout is described below:
  • Always start your code with a description, because when you write a lot of code, names alone can be confusing. The description should have a good name, followed by what the code does, then the files needed to run it. This will save you and others a lot of time in the long run.
######################Daily_mail and dispatch_cockpit###############################
#######open VPN Client ######
##Send a mail to all seller manager and make output for dispatch cockpit
#delisted file from BI, order from BOB
  • Then load all the packages needed for the analysis (this makes it easy to see which packages are required when you share the file). Always use the suppressPackageStartupMessages function; it makes the output elegant.
#################load required package
suppressPackageStartupMessages(require("dplyr"))
suppressPackageStartupMessages(require("mailR"))
suppressPackageStartupMessages(require("lubridate"))
suppressPackageStartupMessages(require("htmlTable"))
suppressPackageStartupMessages(require("googlesheets"))
currentDate = Sys.Date() ##current date to make folder and use in file name

  • Set R's working directory to the folder that has all the input files. If you are running R code daily for a repetitive task, always have separate folders for input and output (for output you can create a new folder for each day and keep the files there).
#set input to require directory
setwd("M:/R_Script")
filepath=getwd()
setwd(paste(filepath, "Input", sep="/"))

  • If you can, always import all files at the start of the analysis.

seller = read.csv("sellers_delisting.csv", stringsAsFactors = F)
order = read.csv2("order.csv")

  • While writing code, if you are reading a heavy file or pulling from a database, always make a copy of the original object and work on the copy as you progress (say I imported a file into bob; I then make a copy of bob and do all the analysis on the copy). While writing code you will make mistakes, and having to import the original file again is tedious.

order_new = order

  • If you are making many subsets of the data, always give them the same name, like "temp" for a subset, and some relevant name for a summary of a subset.
temp = subset(seller, seller$Date.delisted > as.Date(Sys.Date()) - 30 &
              seller$Status == "Delisted",
              select = c("Seller.Name", "Reason.for.delisting"))
#summarize
seller_delisted = table(temp$Seller.Name, temp$Reason.for.delisting)

  • When you save output, always save it in the output folder or today's folder, with the date in the file name. It will save you from a lot of confusion.

#Save the file
setwd("M:/Daily/Daily")
dir.create(as.character(currentDate)) #new folder with name current date
setwd(paste("M:/Daily/Daily", currentDate, sep="/"))
csvFileName1 = paste("Threshold limit and seller delisted ", currentDate, ".csv", sep="") #File name with date
write.csv(seller_delisted, file=csvFileName1, row.names = F)

  • When you save code that needs further fine-tuning, always use git to commit, or put a version in the file name, like text_v1.R, then text_v2.R, and so on.
  • If you are running multiple scripts one after another, always remove all variables from R once a single analysis is completed, so that old variables do not interfere with the variables of the new code.
rm(list=ls())
Now you are ready to write lucid code.

Vlookup in R

The first thing on my mind when I used R for the first time was: how the hell will I use Vlookup in R? (All my reports had at least one Vlookup.) I googled it, and the answers were not lucid. If you google it, you will most probably come across merge as the answer. Merge is a base function and, like most base functions (except a very few), it is complicated to use. Plus, Excel users are not that familiar with relational thinking; for them the info in each cell is separate. Excel users never think of data as columns. We (Excel users) think about how we will add two cells, or how we will look up the value of cell A1 in table B1:C10. We never think "let's look up the values of column A in table B:C" or "add column A to column B".
Advice: if you come from an Excel background, start thinking of all data as columns and start respecting the structure of the data. In Excel you can add any two cells (A1 and A5) and put the result somewhere in C5, or have different types of data in one column (like a number in A1, a date in A2, a string in the third). This is a very bad habit. Always think of any operation as a column operation, not a cell operation. If you have to add two series, put them in separate columns and add those to make a third column. Any analysis, reporting, or manipulation only consists of joining columns and then summarizing (visualization, modeling). Now when someone asks me for an analysis, I just have to know where the columns with that info are, how I can join them, and how to summarize; that is all there is to any reporting.
Why am I talking so much about columns in a Vlookup tutorial? The reason is that in any database or programming language, a Vlookup means pulling related info about one column from another table, and both tables must share a common id.
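As a tiny runnable sketch of this column mindset (toy data; the column names are just made up for illustration):

```r
# Toy data: each series lives in its own column
sales <- data.frame(units = c(3, 5, 2),
                    price = c(10, 8, 12))

# One column operation replaces many cell-by-cell Excel formulas
sales$revenue <- sales$units * sales$price

sales$revenue  # 30 40 24
```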

Let's break down Vlookup.
Vlookup takes a value, say "A", finds that value "A" in another table, and then pulls the info related to "A" from that table.
This is called joining in databases and in R: you take a list of values, join (match these values against the next table), and then pull the info related to these values.

Let's take an example.
##make data frame
master <- data.frame(id = 1:50,
                     name = letters[(0:49) %% 26 + 1], # letters has only 26 elements, so recycle them
                     date = seq(as.Date("2016-01-01"), by = "week", len = 50))

Now we have a different list which only has ids.
##lookup values
lookup = data.frame(id = c(23, 50, 4, 45))

Now we need to look up the names of these ids in the master data frame.
Merge?? I have not used it for ages; there are easier solutions.
##load dplyr
library(dplyr)

dplyr has many user-friendly join functions.



Let's get back to the problem.
##lookup
id_lookup = left_join(lookup, master, by = "id") # output keeps every row of lookup; if no match is found it returns NA
or
id_lookup = right_join(master, lookup, by = "id") ## both columns should have a common name

If the column names are different you can specify the mapping:
##If column names are different
id_lookup = right_join(master, lookup, by = c("id" = "id2")) # assuming lookup's column is called id2

or rename the column using
colnames(lookup)[x] = "id" # x is the column index
id_lookup = rename(lookup, id = id2) # rename is a dplyr function

The new id_lookup will have the column names "id", "name", "date". If you don't need date, you can always make a subset of the data frame and keep only the required columns. Or, before the join, make a subset of master with only the required columns and then join. Any way you like.

##subset of data
id_lookup = id_lookup[ , !(names(id_lookup) %in% "date")] # negative indexing does not work with column names
or
id_lookup = id_lookup[ , c("id", "name")]
or
id_lookup = id_lookup[ , c(1, 2)]
or
id_lookup = subset(id_lookup, select = c("id", "name"))
Caution: make sure the names are the same for similar fields -- not a case where the column name is id but the observations are actually names. This is where respect for the database structure comes in.

Get used to joins; these are all the joins you need to perform any lookup. You mostly never perform a lookup for a single value; it is always a column lookup. Best practice: always make a data.frame of the values you have to look up and join it to the next table.
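Putting that practice together, here is a minimal end-to-end sketch (toy data; the table and column names are invented for illustration):

```r
library(dplyr)  # for left_join

# The table you would normally point Vlookup at (B:C in Excel terms)
master <- data.frame(id = 1:10,
                     name = letters[1:10],
                     stringsAsFactors = FALSE)

# Always put the values you want to look up in their own data frame
lookup <- data.frame(id = c(3, 7, 42))

# The join is the lookup; unmatched ids come back as NA instead of #N/A
result <- left_join(lookup, master, by = "id")
result$name  # "c" "g" NA

# Base-R equivalent of the same lookup, using match()
lookup$name <- master$name[match(lookup$id, master$id)]
```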


WBS - Part 3 - lapply to make an Individual file for Each Topic

The WDI is a large set of data. In the last two blogs we discussed how to do the cleaning and manipulation, along with making beautiful visualizations.
But sometimes we don't have enough resources, or it gets boring to run the same code again and again. What if we had data that was already cleaned (ready for any analysis) and arranged topic-wise, so that we could easily access it when needed, without having to do all the dirty work in R? Maybe you don't have R on another computer and want to run some analysis in Tableau or Excel on a particular topic. It is always good to have cleaned data.
We will take the big chunk of data, do all the cleaning and manipulation, then produce a csv for each topic and save it for further access.

Let's get started.
The first part is similar to the old tutorials, so I will just paste the code here.

###download world bank data "http://data.worldbank.org/products/wdi" 
#>> "Data catalog downloads (Excel | CSV)">> "CSV"
##unzip and keep in directory of your choice my is "M:/R_scripts/Combine"
#################load required package

##if (!require("dplyr")) install.packages('dplyr') 
# if you are not sure if package is installed
suppressPackageStartupMessages(require("dplyr"))
suppressPackageStartupMessages(require("tidyr"))
suppressPackageStartupMessages(require("reshape2"))
suppressPackageStartupMessages(require("readr"))
suppressPackageStartupMessages(require("googleVis"))
currentDate = Sys.Date()

#########Set the file directory
setwd("M:/")
filepath=getwd()
setwd(paste(filepath, "R_Script/Combine", sep="/"))

#####readfile from your directory
wdi = read_csv("WDI_Data.csv")
country = read_csv("WDI_Country.csv")
i_name= read_csv("WDI_Series.csv")

#### create subset of above data, select only required row
## required col from wdi
wdi_sub = wdi[ , c(1,3,5:60)]

##lets run analysis on country names only;
#country name in wdi file has other names like summary of region
country_sub = subset(country, country$`Currency Unit`!="" ,
select = c("Table Name", "Region")) # if currency unit is blank its not country
colnames(country_sub) <- c("Country Name", "Region")

Now let's make a list of all Topics from i_name.

# lets make list of topic
i_name_sub = as.data.frame(table(i_name$Topic))
i_name_sub = as.character(i_name_sub[,1])
Now we are all set. Let's run a loop to get the subset of each topic's indicator names, then left join with the required data frames.

###let's use lapply on each topic
lapply(i_name_sub, function(x){
## take each list as temp and get Indicator Name related to it
temp = as.character(x)
temp = subset(i_name, i_name$Topic==temp, select="Indicator Name")
##left join to get only those Indicator data and country
wdi_sub_temp = left_join(country_sub, wdi_sub)
wdi_sub_temp = left_join(temp, wdi_sub_temp)
##gather date and expand Indicator Name
wdi_sub_temp = gather(wdi_sub_temp, "years", "sample", 4:59)
colnames(wdi_sub_temp) <- c("Indicator.Name", "Country.Name","Region" ,"years", "Value")
wdi_sub_temp = dcast(wdi_sub_temp, Country.Name+years+Region~Indicator.Name, value.var = "Value", na.rm = T )
##make years as date
wdi_sub_temp$years = paste(wdi_sub_temp$years,"-01-01", sep="")
wdi_sub_temp$years=as.Date(wdi_sub_temp$years, "%Y-%m-%d")
##let make unique ID in each dataset if we want to join later on for any analysis
wdi_sub_temp$ID_for_join = paste(wdi_sub_temp$Country.Name, wdi_sub_temp$years, sep="-")
##save file
setwd(paste(filepath, "R_Script/Output", sep="/"))
csvname = paste(gsub(":", ",", x), ".csv", sep="") #file name can't have ":"
write.csv(wdi_sub_temp, file=csvname, row.names = F)
setwd(filepath)
})
###a total of 91 files will be produced
###You can find all 91 files here: https://www.dropbox.com/sh/sk7f7uoz9t7mb38/AACxA8gGTXZJV90CycB4uT_Ka?dl=0
##download any file you need and play around.
#happy coding
Advice: don't use 'for' and 'while' loops; avoid them as much as possible (for 'if', use ifelse). I know you are used to the for loop, but it is too slow in R. Always use the apply family as far as possible, no matter how small the loop is. If you want to be good at R, you will have to know apply. Don't try to find another option. (I used the 'for' loop in R for a long time, but I had to go back to basics and learn lapply; it is inevitable.)
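As a small illustration of the difference (a toy vector; nothing here comes from the WDI data):

```r
x <- 1:5

# for loop: pre-allocate, index by hand, update in place
squares <- numeric(length(x))
for (i in seq_along(x)) {
  squares[i] <- x[i]^2
}

# sapply (apply family): the same result as one expression
squares2 <- sapply(x, function(v) v^2)

identical(squares, squares2)  # TRUE
```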

We have all the files ready for analysis. All 91 files are available at Dropbox_WBS; you can download any file and play around.
