Saturday, 3 December 2016

Data Discovery vs. Data Extraction

Looking at screen-scraping at a simplified level, there are two primary stages involved: data discovery and data extraction. Data discovery deals with navigating a web site to arrive at the pages containing the data you want, and data extraction deals with actually pulling that data off of those pages. Generally when people think of screen-scraping they focus on the data extraction portion of the process, but my experience has been that data discovery is often the more difficult of the two.

The data discovery step in screen-scraping might be as simple as requesting a single URL. For example, you might just need to go to the home page of a site and extract out the latest news headlines. On the other side of the spectrum, data discovery may involve logging in to a web site, traversing a series of pages in order to get needed cookies, submitting a POST request on a search form, traversing through search results pages, and finally following all of the "details" links within the search results pages to get to the data you're actually after. In cases of the former a simple Perl script would often work just fine. For anything much more complex than that, though, a commercial screen-scraping tool can be an incredible time-saver. Especially for sites that require logging in, writing code to handle screen-scraping can be a nightmare when it comes to dealing with cookies and such.
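To make the more involved end of that spectrum concrete, here is a minimal Python sketch using the third-party requests library, whose Session object persists cookies across requests; the URLs, form field names, and pagination parameter are all hypothetical placeholders for whatever the target site actually uses:

import requests

# a Session persists cookies, which covers the "log in, then browse" pattern
session = requests.Session()

# step 1: log in (URL and form field names are hypothetical)
session.post("https://example.com/login",
             data={"username": "me", "password": "secret"})

# step 2: submit the search form as a POST request
resp = session.post("https://example.com/search", data={"query": "widgets"})
pages = [resp.text]

# step 3: walk the paginated search results (page parameter assumed)
for page_num in range(2, 5):
    resp = session.get("https://example.com/search",
                       params={"query": "widgets", "page": page_num})
    pages.append(resp.text)

# each entry in pages is now ready for the data extraction phase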

In the data extraction phase you've already arrived at the page containing the data you're interested in, and you now need to pull it out of the HTML. Traditionally this has involved creating a series of regular expressions that match the pieces of the page you want (e.g., URLs and link titles). Regular expressions can be complex to deal with, so most screen-scraping applications hide these details from you, even though they may use regular expressions behind the scenes.
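As a concrete illustration of that traditional approach, here is a minimal sketch of regex-based extraction of link URLs and titles from an HTML snippet; the snippet is invented, and the deliberately simple pattern assumes double-quoted hrefs with no extra attributes or nested tags inside the anchor, which is exactly the kind of fragility that makes tools hide the regex details:

import re

html = '<a href="/news/1">Headline one</a> <a href="/news/2">Headline two</a>'

# capture the href value and the link text of each anchor tag
pattern = re.compile(r'<a href="([^"]+)">([^<]+)</a>')

for url, title in pattern.findall(html):
    print("%s -> %s" % (url, title))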

As an addendum, I should probably mention a third phase that is often ignored, and that is, what do you do with the data once you've extracted it? Common examples include writing the data to a CSV or XML file, or saving it to a database. In the case of a live web site you might even scrape the information and display it in the user's web browser in real-time. When shopping around for a screen-scraping tool you should make sure that it gives you the flexibility you need to work with the data once it's been extracted.
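For example, here is a minimal sketch of that delivery step, writing extracted records to a CSV file with Python's standard library (the records themselves are invented):

import csv

# hypothetical records produced by the extraction phase
records = [
    {"url": "/news/1", "title": "Headline one"},
    {"url": "/news/2", "title": "Headline two"},
]

with open("headlines.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["url", "title"])
    writer.writeheader()
    writer.writerows(records)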

Source: http://ezinearticles.com/?Data-Discovery-vs.-Data-Extraction&id=165396

Friday, 18 November 2016

Scrape Amazon and price your product the right way – A use case

So you built a product that you want to sell through Amazon.

How do you price your product? 


Amazon is the world’s largest online retailer. Millions of products are sold through Amazon, and a lot of people make their living selling through it. One of the biggest mistakes sellers make on Amazon is pricing their products the wrong way: sometimes they sell overpriced products, sometimes underpriced ones. Both situations are toxic for the business.

We recently worked with a company that helps small businesses sell their products efficiently through Amazon and other marketplaces. One of the key things they do is help people price their products the right way.

What I learned from them is that price is a relative term and a lot of people do not understand it. Pricing is a function of the positioning of your product in the market.

To understand how to position the product, we need to collect data using a technique called web scraping. You can get the data in a CSV file that can be used to answer questions like the following.

1) What is the average price of a comparable product?

Understanding the pricing strategy of your competitors' products is the first step in solving the problem. This can give you a range in which you can price your product. You can get the pricing data by scraping Amazon.
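As a rough sketch of that first step, once competitor prices have been scraped into a list (the fetching itself is omitted here, since Amazon's markup changes often and it actively blocks naive scrapers), the analysis is simple arithmetic; the prices below are sample values:

# sample prices scraped from comparable competitor listings
competitor_prices = [24.99, 27.50, 22.00, 29.99, 25.49]

average = sum(competitor_prices) / len(competitor_prices)
low, high = min(competitor_prices), max(competitor_prices)

print("Average comparable price: %.2f" % average)
print("Observed range: %.2f - %.2f" % (low, high))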

2) Is this a premium product?

People always pay a premium price for a premium product. What makes a product premium? A product is considered premium only when customers believe it is worth the price. Excellent marketing and branding are the ways to position your product as a premium product. You can get the relevant data by scraping Amazon.

3) What are the problems with your competitors' products?

Your competitors' products might have defects, or they might not address a relevant problem. You have every chance of success if you solve a problem that your competitors don't. You can find these problems by analyzing your competitors' product reviews. You can get the review data by scraping Amazon.
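Here is a minimal sketch of that review analysis, assuming the review texts have already been scraped into a list; the reviews and complaint keywords below are invented, and a real analysis would use proper sentiment tooling:

from collections import Counter

# hypothetical review texts scraped from competitor product pages
reviews = [
    "Broke after two weeks, flimsy plastic.",
    "Great value but the battery died quickly.",
    "Battery life is terrible, would not buy again.",
]

# illustrative complaint keywords to tally across reviews
keywords = ["broke", "flimsy", "battery", "terrible"]

counts = Counter()
for review in reviews:
    text = review.lower()
    for word in keywords:
        if word in text:
            counts[word] += 1

print(counts.most_common())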

By analyzing the data you can reach a point where your profit margin looks healthy and your pricing looks sensible. Buyers buy value, not your product. Differentiate your product and position it as a superior product. Give people a reason to buy; that is the only way to succeed.

Source: http://blog.datahut.co/scrape-amazon-and-price-your-product-the-right-way-a-use-case/

Thursday, 3 November 2016

Outsource Data Mining Services to Offshore Data Entry Company

Companies in India offer complete solution services for all types of data mining services.

The Data Mining and Web research services offered help businesses get critical information for their analysis and marketing campaigns. Because this process requires professionals with good knowledge of internet or online research, customers can outsource their Data Mining, Data Extraction, and Data Collection services to utilize resources at a very competitive price.

In a time of recession every company is very careful about cost, so companies are now trying to find ways to cut costs, and outsourcing is a good option for doing so. It is useful for businesses of every size, from small organizations to large ones. Data entry is the most common of all outsourced work. To meet high-quality and precise data entry demands, most corporate firms prefer to outsource data entry services to offshore countries like India.

In India there are a number of companies that offer high-quality data entry work at very competitive rates. Outsourcing data mining work is a crucial requirement for rapidly growing companies that want to focus on their core areas and control their costs.

Why outsource your data entry requirements?

Easy and fast communication: Flexible communication methods are provided, with providers ready to talk with you at a time convenient for you; depending on the demands of the work, a dedicated resource or a whole team will be assigned to drive the project.

Quality with a high level of accuracy: Experienced companies that handle a variety of data-entry projects develop dedicated quality processes for maintaining the best quality of work.

Turnaround time: The capability to deliver fast turnaround as per project requirements and meet your project deadline; dedicated staff can work 24/7 with a high level of accuracy.

Affordable rates: Services are provided at affordable industry rates. To minimize cost, every aspect of the system is customized to handle the work efficiently.

Outsourcing service providers are business process outsourcing companies specializing in data mining and data entry services: teams of highly skilled and efficient people, with a singular focus on data processing, data mining, and data entry outsourcing, catering to projects of varied natures and types.

Why outsource data mining services?

360 degree Data Processing Operations
Free Pilots Before You Hire
Years of Data Entry and Processing Experience
Domain Expertise in Multiple Industries
Best Outsourcing Prices in Industry
Highly Scalable Business Infrastructure
24X7 Round The Clock Services

Experienced management and teams have delivered millions of processed records to customers from the USA, Canada, the UK and other European countries, and Australia.

Outsourcing companies specialize in data entry operations and guarantee the highest quality and on-time delivery at the least expensive prices.

Herat Patel, CEO at 3Alpha Dataentry Services, has over 15 years of experience in providing data-related services outsourced to India.

Visit our Facebook Data Entry profile for comments & reviews.

Our services help convert any kind of hard-copy source, and our data mining services help collect business contacts, customer contacts, product specifications, and more from different web sources. We promise to deliver the best-quality work and to help you excel in your business by letting you focus on your core business activities. Outsource data mining services to India, take advantage of outsourcing, and save costs.

Source: http://ezinearticles.com/?Outsource-Data-Mining-Services-to-Offshore-Data-Entry-Company&id=4027029

Tuesday, 18 October 2016

Web Scraping with Python: A Beginner’s Guide

In the Big Data world, web scraping and data extraction services are the primary requisites for big data analytics. Pulling data from the web has become almost inevitable for companies that want to stay in business. The next question is how a beginner should go about web scraping.

Data can be extracted or scraped from a web source using a number of methods. Popular websites like Google, Facebook, and Twitter offer APIs to view and extract the available data in a structured manner, which avoids the need for methods the provider may not permit. However, the need to scrape a website arises when the information is not readily offered through an API. Python, an open-source programming language, is often used for web scraping thanks to its simple syntax and rich ecosystem. It offers a library called BeautifulSoup that carries out this task. Let’s take a deeper look at web scraping using Python.

Setting up a Python Environment:

To carry out web scraping with Python, you will first have to install a Python environment, which enables you to run code written in the Python language. The following libraries perform the data scraping:

Beautiful Soup is an easy-to-use Python library and one of the finest tools for extracting information from a webpage. It can scrape information from web pages in the form of tables, lists, or paragraphs. urllib2 is another library that can be used in combination with BeautifulSoup to fetch the web pages; it is a Python module that can fetch URLs. Filters can then be added to extract specific information from the pages.

For Mac OS X:

To install the Python libraries on Mac OS X, open a terminal window and type the following commands, one at a time:

sudo easy_install pip

pip install BeautifulSoup4

pip install lxml

For Windows 7 & 8 users:

Windows 7 & 8 users need to ensure that the Python environment is installed first. Once it is, open the command prompt, navigate to the root C:/ directory, and type the following commands:

easy_install BeautifulSoup4

easy_install lxml

Once the libraries are installed, it is time to write data scraping code.

Running Python:

Data scraping should be done with a distinct objective, such as scraping the current stock of a retail store. First, use a web browser to navigate to the website that contains this data. After identifying the table you want, right-click anywhere on it and select Inspect Element from the menu. This pops up a window at the bottom or side of your screen displaying the website's HTML code. In this example the data appears in a table, so you might need to scan through the HTML until you find the lines of code that correspond to that table on the webpage.

Python offers some other alternatives to BeautifulSoup for HTML scraping. They include the following (a minimal Scrapy sketch appears after the list):

    Scrapy
    Scrapemark
    Mechanize
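As a taste of the first alternative, here is a minimal Scrapy spider sketch; the spider name, the start URL, and the selectors are hypothetical, and this is only a sketch of current Scrapy idioms rather than a full project:

import scrapy

class HeadlineSpider(scrapy.Spider):
    """Sketch spider: collect the URL and text of every link on a page."""
    name = "headlines"
    start_urls = ["https://example.com"]  # hypothetical target

    def parse(self, response):
        # yield one item per anchor tag on the page
        for link in response.css("a"):
            yield {
                "url": link.css("::attr(href)").get(),
                "title": link.css("::text").get(),
            }

You can run a standalone spider like this with "scrapy runspider spider.py -o links.csv" to export the items to a CSV file.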

Web scraping converts unstructured data in HTML code into structured forms such as tabular data in an Excel worksheet. It can be done in many ways, ranging from the use of Google Docs to programming languages. People who lack programming knowledge or technical competence can instead acquire web data by using web scraping services that provide ready-to-use data from websites of their preference.

HTML Tags:

To perform web scraping, users should have a sound knowledge of HTML tags. It helps to know that HTML links are defined using the anchor tag, i.e. the <a> tag: <a href="http://...">The link text goes here</a>. An HTML list comprises the <ul> (unordered) and <ol> (ordered) tags, with each list item starting with <li>.

HTML tables are defined with the <table> tag; each row uses <tr>, and the row is divided into data cells with <td>. A few other basics (a short parsing sketch follows this list):

    <!DOCTYPE html>: An HTML document starts with a document type declaration
    The main content of the HTML document, as unformatted plain text, sits between the <body> and </body> tags
    Headings are defined using the heading tags <h1> through <h6>
    Paragraphs are defined with the <p> tag
    An entire HTML document is contained between <html> and </html>
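To see these tags in a scraping context, here is a minimal sketch (the HTML snippet and its values are invented for illustration) that parses a tiny document with BeautifulSoup and pulls out a link, a list item, and a table cell:

from bs4 import BeautifulSoup

html = """
<html><body>
  <p>Intro paragraph</p>
  <a href="http://example.com">A link</a>
  <ul><li>First item</li></ul>
  <table><tr><td>Cell value</td></tr></table>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
print(soup.a["href"])   # http://example.com
print(soup.li.string)   # First item
print(soup.td.string)   # Cell value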

Using BeautifulSoup in Scraping:

While scraping a webpage using BeautifulSoup, the main concern is to identify your final objective. For instance, if you would like to extract a list from a webpage, a step-wise approach is required:

    The first and foremost step is to import the required libraries:

# import the library used to query a website
import urllib2

# specify the url (the target page URL, truncated in the original, goes here)
wiki = "https://"

# query the website and return the html to the variable 'page'
page = urllib2.urlopen(wiki)

# import the Beautiful Soup functions to parse the data returned from the website
from bs4 import BeautifulSoup

# parse the html in the 'page' variable, and store it in Beautiful Soup format
soup = BeautifulSoup(page)

    Use the "prettify" function to visualize the nested structure of the HTML page
    Working with soup tags:

soup.<tag> returns the content between the opening and closing tag, including the tag itself.

    In [30]: soup.title

Out[30]: <title>List of Presidents in India till 2010 – Wikipedia, the free encyclopedia</title>

    soup.<tag>.string: returns the string within the given tag
    In [38]: soup.title.string
    Out[38]: u'List of Presidents in India and Brazil till 2010 in India – Wikipedia, the free encyclopedia'
    Find the links within the page's <a> tags: since links are tagged with "<a>", use soup.a, which returns the first link in the web page. Let's do it.
    In [40]: soup.a

Out[40]: <a id="top"></a>

    Find the right table:

Since we are searching for a table with information about Presidents in India and Brazil till 2010, identifying the right table is the first task. Here's a command to scrape the information enclosed in all table tags:

all_tables = soup.find_all('table')

To identify the right table, use the table's "class" attribute as a filter. Find the class name by right-clicking on the required table in the web page, as follows:

    Inspect element
    Copy the class name, or find the class name of the right table in the last command's output.

right_table = soup.find('table', class_='wikitable sortable plainrowheaders')

right_table

That’s how we can identify the right table.

    Extract the information into a DataFrame: we need to iterate through each row (<tr>), assign each element of the row (<td>) to a variable, and append it to a list. Let's first analyse the HTML structure of the table (the table headings sit in <th> tags).

To access the text value of each element, use the "find(text=True)" option on it. Finally, the data ends up in a DataFrame, as in the sketch below.
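Here is a minimal sketch of that loop, continuing from the right_table found above; the two-cells-per-row assumption and the column names are hypothetical, so adjust them to the table you are actually scraping:

import pandas as pd

# hypothetical column lists; adjust to the table being scraped
names, terms = [], []

for row in right_table.find_all("tr"):
    cells = row.find_all("td")
    if len(cells) == 2:  # assumption: data rows carry two <td> cells
        names.append(cells[0].find(text=True))
        terms.append(cells[1].find(text=True))

df = pd.DataFrame({"name": names, "term": terms})
print(df.head())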

There are various other ways to scrape data using BeautifulSoup that reduce the manual effort of collecting data from web pages. Code written with BeautifulSoup is generally considered more robust than regular expressions. The web scraping method we discussed uses the BeautifulSoup and urllib2 libraries in Python. That was a brief beginner’s guide to get started with Python for web scraping.

Source: https://www.promptcloud.com/blog/web-scraping-python-guide

Wednesday, 28 September 2016

Scraping Yelp Business Data With Python Scraping Script

Yelp is a great source of business contact information, with details like addresses, postal codes, contact numbers, and website addresses that other sites like Google Maps just do not provide. Yelp also provides reviews of each business. A Yelp business database can be useful for telemarketing, email marketing, and lead generation.

Are you looking for a Yelp business details database? Are you looking to scrape data from the Yelp website or business directory? Are you looking for Yelp screen scraping software, or to scrape business contact information from Yelp? Then you are in the right place.

Here I am going to discuss how to scrape Yelp data for lead generation and email marketing. I have made a simple and straightforward Yelp data scraping script in Python that can scrape data from the Yelp website. You can use this Yelp scraper script absolutely free.

I have used the urllib and BeautifulSoup packages: urllib to make the HTTP requests, BeautifulSoup to parse the HTML, and threads to make the scraping faster.
Yelp Scraping Python Script

import urllib
from bs4 import BeautifulSoup
from threading import Thread

# list of yelp urls to scrape
url = ['http://www.yelp.com/biz/liman-fisch-restaurant-hamburg',
       'http://www.yelp.com/biz/casa-franco-caramba-hamburg',
       'http://www.yelp.com/biz/o-ren-ishii-hamburg',
       'http://www.yelp.com/biz/gastwerk-hotel-hamburg-hamburg-2',
       'http://www.yelp.com/biz/superbude-hamburg-2',
       'http://www.yelp.com/biz/hotel-hafen-hamburg-hamburg',
       'http://www.yelp.com/biz/hamburg-marriott-hotel-hamburg',
       'http://www.yelp.com/biz/yoho-hamburg']

# function that does the actual scraping job
# (Python 2 script: urllib.urlopen and print statements are Python 2 idioms)
def scrape(ur):
    html = urllib.urlopen(ur).read()
    soup = BeautifulSoup(html)

    title = soup.find('h1', itemprop="name")
    saddress = soup.find('span', itemprop="streetAddress")
    postalcode = soup.find('span', itemprop="postalCode")
    print title.text
    print saddress.text
    print postalcode.text
    print "-------------------"

threadlist = []

# start one scraping thread per url
for u in url:
    t = Thread(target=scrape, args=(u,))
    t.start()
    threadlist.append(t)

# wait for all threads to finish
for b in threadlist:
    b.join()

Recently I worked with a German company on a Yelp scraping project and delivered data as per their requirements. If you are looking to scrape data from business directories like Yelp, send me your requirements and I will get back to you with a sample.

Source: http://webdata-scraping.com/scraping-yelp-business-data-python-scraping-script/

Monday, 19 September 2016

Run Code Template – New Feature Added to Fminer Web Scraping Tool

Fminer is a powerful web scraping tool; I already gave a brief overview of all the Fminer features in a previous post. In this post I am going to introduce one of its interesting features, Run Code Template, which was recently added to Fminer. This feature is similar to the "Run Code" action, but differs in how you use it: the Run Code action is used inside the data scraping flow, and its Python code gets executed when the scraper starts running.

Run Code Templates, on the other hand, are saved Python code snippets that you can run on the data tables after scraping completes. For example, if you get white space in scraped data, you can easily trim the left and right spaces by executing the "strip_column" template; see the code of that template below.

'''Strip all data of a column in data table.
Removes the blanks at the head and the tail of the data.
'''

tabName = '[%table1|data table%]'
colName = '[%table1.column1|table column for strip%]'

tab = tables[tabName]
for i, row in enumerate(tab):
    row[colName] = row[colName].strip()
    tab.edit_row(i, row)

This template comes with Fminer, along with a few other templates like "merge_tables_with_same_columns". Below are the steps to execute template Python code on scraped data.

Step 1: Click on the second icon from the right, labelled "Run Code", under the Data section.

Step 2: A popup will appear; click on the "Templates" icon, choose the template you want to execute, and click OK.

Step 3: A configuration window will appear, asking you to choose the table, and the column within that table, on which you want to execute the code. Click OK again.

Step 4: You can now see the code of the template. Click the execute icon and the script will start running; depending on the number of records, it will take some time to finish.

In many web scraping projects I have found this template code very handy for cleaning data and making life easy. Templates are stored at the following path, so you can create your own templates with customized code:

C:\Program Files (x86)\FMiner\templates

I have created one template that I use to remove HTML code that comes along when scraping badly organized HTML pages. Below is the code of the template for stripping HTML:

'''Strip HTML will remove all html tags of a column in data table.
'''
import re

tabName = '[%table1|data table%]'
colName = '[%table1.column1|table column for substring%]'
colNew = '[%table1.column1|table column to add new data%]'

# compile the tag-matching pattern once, outside the loop
cleanr = re.compile('<.*?>')

tab = tables[tabName]
for i, row in enumerate(tab):
    row[colNew] = re.sub(cleanr, '', row[colName])
    tab.edit_row(i, row)
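Following the same pattern, here is a sketch of a custom template you could save yourself; it assumes only the tables object and the placeholder syntax shown above, and simply lower-cases a column:

'''Lowercase all data of a column in data table.
'''
tabName = '[%table1|data table%]'
colName = '[%table1.column1|table column to lowercase%]'

tab = tables[tabName]
for i, row in enumerate(tab):
    row[colName] = row[colName].lower()
    tab.edit_row(i, row)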

Stay connected, as I am going to post more code templates that will make your web scraping life easy and let you manipulate data on the fly.

Source: http://webdata-scraping.com/run-code-template-new-feature-added-fminer-web-scraping-tool/

Tuesday, 6 September 2016

Calculate your ROI on Web Scraping using our ROI Calculator

Staying atop the competition is vital for the survival and growth of businesses these days. Ever since big data came into the picture, web scraping has become something businesses from every industry have to invest in. If your company is not in a technically advanced industry, web scraping could even be a nightmare to start with. Wondering whether in-house web scraping is right for you? In-house or outsourced, in the end it's all about the return on investment.

ROI Calculator

Considering the numerous factors that determine how much web scraping can cost you, it's not easy to calculate the ROI of your in-house web scraping.

In-house web scraping is certainly a challenging process. If you plan on going down this path, here is a brief list of prerequisites.

Engineers

Technically skilled labour is an essential requirement for web scraping. Since web scraping techniques are complicated, good programming skills are needed to write, run, and maintain the scraping bots. The cost of this labour can be one of the biggest drawbacks of in-house web scraping.

Hardware Resources

Web scraping is a resource-hungry process that requires high-end servers and lots of bandwidth. Without adequate resources, you might end up losing important data. The cost of quality servers could easily make you reconsider doing web scraping on your own, not to mention the doubling up of these resources needed to keep the data intact, especially if you're operating at large scale.

Maintainability and upkeep of your tech stack

Once you have your servers and other technical components set up, the real work only starts. You have to ensure the availability of your servers, data backups, restoration of previous states, and failovers, among the many other complications of managing servers and fixing them when something goes wrong. You need to allocate resources, both people and hardware, to take care of all of the above.

Time

Time is something we cannot really include in the equation when calculating the returns, but it definitely determines whether in-house web scraping is worth it. Although web scraping is the fastest way to acquire data, the initial setup and ongoing maintenance are time-consuming and complicated. This can easily lead to conflicts when you have to distribute your time between web scraping and the other business activities that are crucial for your company.
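As a back-of-the-envelope illustration of the comparison such a calculator automates, here is a minimal Python sketch; every number in it is made up, so plug in your own estimates for the cost factors discussed above:

# hypothetical monthly cost estimates for in-house web scraping
engineer_cost = 8000.0     # salaries for scraping engineers
hardware_cost = 1500.0     # servers, storage, bandwidth
maintenance_cost = 1000.0  # upkeep, failovers, fixes

in_house_total = engineer_cost + hardware_cost + maintenance_cost

outsourced_total = 4000.0  # hypothetical quote from a scraping service
data_value = 15000.0       # hypothetical monthly value of the scraped data

for label, cost in [("In-house", in_house_total), ("Outsourced", outsourced_total)]:
    roi = (data_value - cost) / cost
    print("%s: cost %.0f, ROI %.0f%%" % (label, cost, roi * 100))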

Try the ROI Calculator

We came up with an ROI calculator to easily calculate your return on investment with our web scraping services. Using it, you can compare the cost of in-house web scraping with PromptCloud's dedicated web scraping services. Find out how much you can save by going the PromptCloud way.

Source: https://www.promptcloud.com/blog/calculate-roi-on-web-scraping