Hi,
Now let's begin.
A Facebook developer account is required to get started with the Facebook Graph API. If you don't have one, you can upgrade your personal Facebook account to a developer account from this link.
After registering as a Facebook developer, go to "Tools & Support" -> "Graph API Explorer".
A token and permissions are required to explore the Graph API, so just click on "Get Token";
you will then see the image below.
Since the public profile permission is included by default, just click on "Get Access Token" (see the red box in the image above).
A new window like the one below will open; select the options you need and click on the "Get Access Token" button.
The access token will then be generated (see the image below).
Now that we have a token, let's start exploring.
Extracting Comments from a Public Facebook Post
Suppose the post below is the one we want to analyze. Click on the post's date/time (see the highlighted box below).
Then copy the ID shown below; this is the post ID.
Go to the Graph API Explorer.
Type "Post_id/comments" in the query box below and click "Submit"; you will then see all the comments on your post in JSON form.
Next, click the "Get Code" option at the bottom of that page to get the cURL code. Copy this URL; we will use it in R.
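The copied URL has the general shape below; {post-id} and {access-token} are placeholders for your own values:
https://graph.facebook.com/v2.8/{post-id}/comments?access_token={access-token}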
Text Analysis in R
Now open RStudio (graphical layout)
and install the following packages in RStudio.
R Packages required:
- install.packages("RCurl"): lets us compose general HTTP requests and provides convenient functions to fetch data.
- install.packages("rjson"): converts JSON objects into R objects and vice versa (the script below loads RJSONIO, which provides the same fromJSON() function).
- install.packages("tm"): a text-mining package for applications within R. It offers a number of transformations that ease the tedium of cleaning data.
- NLP
- slam
- bitops
- RJSONIO
- wordcloud
- RColorBrewer
- tmap
- LearnBayes
- RXKCD
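For convenience, all of the packages listed above can be installed in one call:
install.packages(c("RCurl", "rjson", "RJSONIO", "tm", "NLP", "slam", "bitops", "wordcloud", "RColorBrewer", "tmap", "LearnBayes", "RXKCD"))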
Type the following commands in a new script.
library("RCurl")
library("NLP")
library("slam")
library("tm")
library("bitops")
library("RJSONIO")
library("wordcloud")
library("RColorBrewer")
library("tmap")
library("LearnBayes")
library("RXKCD")
url<- "https://graph.facebook.com/v2.8/378738755798904/comments?access_token=EAACEdEose0cBAMDNBeBPnHayQJYoHwajCqzX8G20jqxGUZ
Cq085T6yqDZAekeHZB2FlL4qBBAKKn5dENi98Iz4a1uOy3RL72TMIVYb6nisc7mmntvWh
9FfBOHo86IkTqWUocBByEiGfarmS1CexnfFgcAZArCvYskpVoWuTsRwZDZD"
d <- getURL(url)                                                     # fetch the raw JSON response
j <- fromJSON(d)                                                     # parse the JSON into an R list
comments <- sapply(j$data, function(x) list(comment = x$message))    # pull out each comment's text
comments
The URL used in the code above is the one copied from the cURL code in the Graph API Explorer.
You will then get all the comments on your post.
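Note that the Graph API paginates comment results, so one request may not return everything. A minimal sketch of following the "next" link in the response's paging field (a standard part of Graph API JSON responses) to collect every page:
all_comments <- c()
page <- fromJSON(getURL(url))
repeat {
  all_comments <- c(all_comments, sapply(page$data, function(x) x$message))   # collect this page's comments
  next_url <- page$paging[["next"]]                                           # link to the next page, if any
  if (is.null(next_url)) break
  page <- fromJSON(getURL(next_url))
}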
Cleaning & Analyzing Data:
We create a corpus and remove extra spaces, special characters, and other unwanted text. Now let's type the commands below into the script.
Cleanedcomments <- sapply(comments, function(x) iconv(enc2utf8(x), sub = "byte"))   # fix character-encoding issues
my_corpus <- Corpus(VectorSource(Cleanedcomments))                                  # build a corpus, one document per comment
my_function <- content_transformer(function(x, pattern) gsub(pattern, "", x))       # helper to strip a given pattern
my_cleaned_corpus <- tm_map(my_corpus, my_function, "/")
my_cleaned_corpus <- tm_map(my_cleaned_corpus, my_function, "@")
my_cleaned_corpus <- tm_map(my_cleaned_corpus, my_function, "\\|")
my_cleaned_corpus <- tm_map(my_cleaned_corpus, content_transformer(tolower))        # lowercase everything
my_cleaned_corpus <- tm_map(my_cleaned_corpus, removeWords, c(stopwords("english"), "wwwmkgdroidblogspot", "wwwmkgdroidblogsp", "httpmkgdroidblogspotin201611viper4androidletvle2max2htmlm1"))   # drop stopwords and custom words
my_cleaned_corpus <- tm_map(my_cleaned_corpus, removePunctuation)
my_cleaned_corpus <- tm_map(my_cleaned_corpus, stripWhitespace)
The words highlighted above (the blog-URL fragments) are custom words removed from all of the post's comments.
You can specify whichever words you want to eliminate from your comments.
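To verify the cleaning, you can look at the first few documents of the corpus with tm's inspect() (assuming the post has at least three comments):
inspect(my_cleaned_corpus[1:3])   # show the first three cleaned comments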
Creating a Term-Document Matrix:
my_tdm <- TermDocumentMatrix(my_cleaned_corpus)   # rows = terms, columns = comments
m <- as.matrix(my_tdm)
View(m)
words <- sort(rowSums(m), decreasing = TRUE)      # total frequency of each term across all comments
my_data <- data.frame(word = names(words), freq = words)
View(my_data)
Here are all the extracted words with their frequencies, as shown below.
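Besides the table, a quick bar chart of the most frequent terms can also help (a base-R sketch plotting the top 10, assuming at least ten distinct terms):
barplot(my_data$freq[1:10], names.arg = as.character(my_data$word[1:10]), las = 2, col = "steelblue", main = "Top 10 terms")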
Creating a Word Cloud:
Type this command in the script:
wordcloud(words = my_data$word, freq = my_data$freq, min.freq = 2, max.words = 100, random.order = FALSE, rot.per = 0.5, colors = brewer.pal(8, "Dark2"))
In this word cloud we plot at most 100 words with a minimum frequency of 2. (Note: brewer.pal() requires at least 3 colors, so the code above requests the full 8-color "Dark2" palette rather than 1, which would only trigger a warning.)
You will see a graph like the one below.
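The word placement has a random element, so the cloud can look slightly different on each run. For a reproducible layout, set a seed before plotting:
set.seed(1234)   # any fixed seed gives a reproducible layout
wordcloud(words = my_data$word, freq = my_data$freq, min.freq = 2, max.words = 100, random.order = FALSE, rot.per = 0.5, colors = brewer.pal(8, "Dark2"))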
Thank you all for reading. If you have any doubts, please post them in the comments section and I will reply as soon as I can.
Hope you enjoy this.