Web scraping is a method for collecting, organizing, and analyzing information that is scattered across the Internet in an unstructured way. It lets us retrieve that data automatically and transform it into a structure we can actually use.
The best-known approach is to use Selenium with Python for web scraping, and I have written a tutorial about it to make it easy to apply.
1. Scrape and save
When traversing large websites, it is always good to store the data you have already downloaded, so you don't have to scrape the same thing again if the program crashes before finishing. Storing it in a key-value store like Redis is simple, but you can also use MySQL or any other file-system caching mechanism.
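A minimal sketch of such a cache with the redis-py client, assuming a Redis server running locally; the `page:` key prefix is just an illustrative naming scheme:

```python
import redis
import requests

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch(url):
    # Return the cached copy if this URL was already downloaded,
    # so a crash and restart never re-scrapes the same page.
    cached = cache.get("page:" + url)
    if cached is not None:
        return cached
    html = requests.get(url, timeout=10).text
    cache.set("page:" + url, html)  # persist before moving on
    return html
```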
2. Optimize requests
Large websites deploy services that track crawling activity. If you send many simultaneous requests from the same IP address, they will classify your traffic as a DoS (Denial of Service) attack and block you instantly. It is therefore advisable to throttle your requests and chain them one after the other, making them look more human. Measure the site's average response time, and use it to decide how many requests to send and how often.
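One simple way to chain requests like this is to sleep between them in proportion to the server's measured response time. A sketch, where the 2x politeness factor is an arbitrary choice, not a standard:

```python
import time
import requests

def polite_get(url, factor=2.0):
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    elapsed = time.monotonic() - start
    # Wait proportionally to how long the server took to answer,
    # so slower sites automatically receive requests less often.
    time.sleep(elapsed * factor)
    return response
```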
3. Make URL table
Keep a record of all the links you have already crawled, in a database table or a key-value store. It will save you if the crawler crashes when you are about to finish; without this list of URLs, a lot of time and bandwidth would be wasted crawling everything again. So make sure you persist the list of URLs.
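A sketch of one way to persist that list with SQLite, which survives crashes because every URL is committed as soon as it is crawled; the table name and schema are illustrative:

```python
import sqlite3

conn = sqlite3.connect("crawler.db")
conn.execute("CREATE TABLE IF NOT EXISTS visited (url TEXT PRIMARY KEY)")

def is_visited(url):
    row = conn.execute("SELECT 1 FROM visited WHERE url = ?", (url,)).fetchone()
    return row is not None

def mark_visited(url):
    conn.execute("INSERT OR IGNORE INTO visited (url) VALUES (?)", (url,))
    conn.commit()  # commit immediately so a crash loses nothing
```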
4. Scraping in phases
It is simpler and safer to cut the scraper into several short phases. For example, you could split the scraping of a large site in two: one phase to accumulate links to the pages you need data from, and another to download those pages and parse their content.
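A sketch of that two-phase split using requests and BeautifulSoup; the index URL and the CSS selector are placeholders that depend entirely on the target site:

```python
import requests
from bs4 import BeautifulSoup

def phase_one(index_url):
    # Phase 1: collect the links and save them to disk.
    soup = BeautifulSoup(requests.get(index_url, timeout=10).text, "html.parser")
    links = [a["href"] for a in soup.select("a.article-link")]  # hypothetical selector
    with open("links.txt", "w") as f:
        f.write("\n".join(links))

def phase_two():
    # Phase 2: read the saved links and download each page for parsing.
    with open("links.txt") as f:
        for url in f.read().splitlines():
            html = requests.get(url, timeout=10).text
            # ...extract the fields you need from html here...
```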
5. Navigation filtering
Do not process every link unless necessary. Instead, program a proper crawling algorithm that sends the scraper only to the pages that actually hold the data you need. It is always tempting to go after everything, but that would be a total waste of bandwidth, time, and storage.
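In practice, this filtering can be as simple as an allowlist of URL patterns that the crawler checks before following a link. A sketch, with hypothetical patterns for a product catalog:

```python
import re

# Only follow detail pages and pagination; everything else is skipped.
ALLOWED = [
    re.compile(r"/products/\d+$"),        # hypothetical detail pages
    re.compile(r"/products\?page=\d+$"),  # hypothetical pagination links
]

def should_crawl(url):
    return any(pattern.search(url) for pattern in ALLOWED)
```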
6. Look for the native API
Most sites expose APIs so programmers can get the data, along with supporting documentation. If the site has an API, you don't need to program a scraper at all, unless the data you want is not provided by the API. So just read their requirements and their data usage policy.
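Calling an API is usually a one-liner compared with parsing HTML. A sketch with the requests library; the endpoint and parameters here are hypothetical, so check the site's documentation for the real ones:

```python
import requests

response = requests.get(
    "https://example.com/api/v1/products",   # hypothetical endpoint
    params={"page": 1, "per_page": 100},     # hypothetical parameters
    headers={"Accept": "application/json"},
    timeout=10,
)
response.raise_for_status()
items = response.json()  # already structured data, no HTML parsing needed
```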
7. Check if the site returns JSON
If the site does not expose an API and you still need its data, look for a server-side JSON request; you may find the data you are looking for there.
In your browser, press F12 to open the developer tools window. Reload the page and go to the Network tab to see the requests ending in .json, so you can identify the URL each one came from. Then open a new tab, paste that link, and the JSON will be displayed with the data.
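Once you have that URL, you can also request it directly from Python instead of a browser tab. A sketch; the URL is a placeholder for the one you found in the Network tab:

```python
import requests

# Paste here the .json URL copied from the browser's Network tab.
url = "https://example.com/data/listing.json"
data = requests.get(url, timeout=10).json()
print(data)
```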
If you are thinking of creating a website for your company, stay tuned: I will be posting tips on that soon!
8. Proxies
Proxies help us hide our IP address, which allows us to make more requests to the same server without being banned. IP bans are especially common on social networks.
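A sketch of routing traffic through a proxy with the requests library; the address below is a placeholder for a proxy you actually control or rent:

```python
import requests

proxies = {
    "http": "http://203.0.113.10:8080",   # placeholder proxy address
    "https": "http://203.0.113.10:8080",
}
# All traffic for this request goes through the proxy, hiding our own IP.
response = requests.get("https://example.com", proxies=proxies, timeout=10)
```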
9. Change User Agent
The user agent is a text string that lets servers identify which device and browser a connection comes from. By changing it, we can connect as if we were an iPhone, an Android device, and so on.
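A sketch of sending a custom User-Agent header with requests; the string below imitates an iPhone and is only one example, in practice you would rotate through several:

```python
import requests

headers = {
    # Example iPhone user agent string; swap or rotate as needed.
    "User-Agent": (
        "Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) "
        "AppleWebKit/605.1.15 (KHTML, like Gecko) "
        "Version/16.0 Mobile/15E148 Safari/604.1"
    )
}
response = requests.get("https://example.com", headers=headers, timeout=10)
```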