Utilize web scraping at scale to quickly get vast amounts of freely available web data into a structured format. This book teaches you how to use Python scripts to crawl websites at scale, scrape data from HTML and JavaScript-enabled pages, and convert it into structured formats such as CSV, Excel, or JSON, or load it into the SQL database of your choice. The book goes beyond the basics of web scraping to cover advanced topics such as natural language processing (NLP) and text analytics, extracting names of people, places, email addresses, contact details, and more from a page at production scale using distributed big data techniques on an Amazon Web Services (AWS)-based cloud infrastructure. It also covers developing a robust data processing and ingestion pipeline on the Common Crawl corpus, a petabyte-scale, publicly available web crawl data set hosted on AWS's Registry of Open Data.

Getting Structured Data from the Internet also includes a step-by-step tutorial on deploying your own crawlers using a production web scraping framework (such as Scrapy) and dealing with real-world issues such as breaking CAPTCHAs, rotating proxy IPs, and more. Code used in the book is provided to help you understand the concepts in practice and write your own web crawler to power your business ideas.
What You Will Learn

- Understand web scraping, its applications and uses, and how to avoid scraping altogether by hitting publicly available REST API endpoints to get data directly
- Develop a web scraper and crawler from scratch using the lxml and BeautifulSoup libraries, and learn about scraping from JavaScript-enabled pages using Selenium (a scraping sketch follows this list)
- Use AWS-based cloud computing with EC2, S3, Athena, SQS, and SNS to analyze, extract, and store useful insights from crawled pages
- Use SQL on PostgreSQL running on Amazon Relational Database Service (RDS) and on SQLite, using SQLAlchemy
- Review scikit-learn, Gensim, and spaCy to perform NLP tasks on scraped web pages, such as named entity recognition (sketched after this list), topic clustering (K-means, agglomerative clustering), topic modeling (LDA, NMF, LSI), topic classification (naive Bayes, gradient boosting classifier), and text similarity (cosine distance-based nearest neighbors)
- Handle web archival file formats and explore Common Crawl open data on AWS
- Illustrate practical applications of web crawl data by building a similar-websites tool and a technology profiler similar to builtwith.com
- Write scripts to create a web-scale backlinks database, similar to Ahrefs.com, Moz.com, Majestic.com, etc., for search engine optimization (SEO), competitor research, and determining website domain authority and ranking
- Use web crawl data to build a news sentiment analysis system or an alternative financial analysis covering stock market trading signals
- Write a production-ready crawler in Python using the Scrapy framework (a minimal spider is sketched after this list) and deal with practical workarounds for CAPTCHAs, IP rotation, and more
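To give a flavor of the kind of code the book builds up to, here is a minimal scraping sketch using requests and BeautifulSoup that writes structured rows to CSV. It is illustrative only; the URL and CSS selectors are hypothetical placeholders, not an example taken from the book.

    # Fetch a page and turn part of it into structured CSV rows.
    # The URL and the "article"/"h2"/"a" selectors are hypothetical placeholders.
    import csv

    import requests
    from bs4 import BeautifulSoup

    response = requests.get("https://example.com/articles", timeout=30)
    response.raise_for_status()

    soup = BeautifulSoup(response.text, "lxml")

    rows = []
    for article in soup.select("article"):
        title = article.select_one("h2")
        link = article.select_one("a")
        rows.append({
            "title": title.get_text(strip=True) if title else "",
            "url": link["href"] if link and link.has_attr("href") else "",
        })

    # Persist the scraped records in a structured format (CSV here; JSON or a
    # SQL database via SQLAlchemy would follow the same pattern).
    with open("articles.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "url"])
        writer.writeheader()
        writer.writerows(rows)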
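In the same spirit, here is a minimal sketch of named entity recognition on scraped text with spaCy, assuming the small English model has been installed (python -m spacy download en_core_web_sm); the sample sentence is invented for illustration.

    # Extract people and places from a snippet of scraped text with spaCy.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    text = "Ada Lovelace worked with Charles Babbage in London."
    doc = nlp(text)

    for ent in doc.ents:
        # Prints entity text and label, e.g. "Ada Lovelace PERSON", "London GPE"
        print(ent.text, ent.label_)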
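Finally, a minimal Scrapy spider sketch; it targets quotes.toscrape.com, Scrapy's public practice site, rather than any site used in the book, and can be run with: scrapy runspider quotes_spider.py -o quotes.json

    # A minimal Scrapy spider that yields structured items and follows pagination.
    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ["https://quotes.toscrape.com/"]

        def parse(self, response):
            for quote in response.css("div.quote"):
                yield {
                    "text": quote.css("span.text::text").get(),
                    "author": quote.css("small.author::text").get(),
                }
            # Follow the "next page" link, if present.
            next_page = response.css("li.next a::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)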
Who This Book Is For

- Primary audience: data analysts and data scientists with little to no exposure to real-world data-processing challenges
- Secondary audience: experienced software developers doing web-heavy data processing who need a primer
- Tertiary audience: business owners and startup founders who need to know more about implementation to better direct their technical team