Tuesday 27 May 2014

Scrapy

Scrapy is a fast, high-level screen scraping and web crawling framework for Python.

Overview

Scrapy is a fast, high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.
For more information, including a list of features, check the Scrapy homepage at: http://scrapy.org
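
As a quick illustration of the structured extraction Scrapy is built for, here is a minimal spider sketch; the target site, item fields, and CSS selectors are placeholders for illustration, not anything prescribed by the Scrapy docs:

    import urlparse  # Python 2.7 stdlib, matching the requirement below

    import scrapy

    class QuoteItem(scrapy.Item):
        text = scrapy.Field()
        author = scrapy.Field()

    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        # Placeholder site; any page with a repeating block structure works.
        start_urls = ["http://quotes.toscrape.com/"]

        def parse(self, response):
            # Yield one structured item per repeated block on the page.
            for quote in response.css("div.quote"):
                item = QuoteItem()
                item["text"] = quote.css("span.text::text").extract()
                item["author"] = quote.css("small.author::text").extract()
                yield item
            # Follow pagination links with this same callback.
            for href in response.css("li.next a::attr(href)").extract():
                yield scrapy.Request(urlparse.urljoin(response.url, href),
                                     callback=self.parse)

Saved as quotes_spider.py, this runs without a project scaffold via: scrapy runspider quotes_spider.py -o items.json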

Requirements

  • Python 2.7
  • Works on Linux, Windows, Mac OS X, BSD

Install

The quick way:
pip install scrapy
For more details see the install section in the documentation: http://doc.scrapy.org/en/latest/intro/install.html

Releases

You can download the latest stable and development releases from: http://scrapy.org/download/

Documentation

Documentation is available online at http://doc.scrapy.org/ and in the docs directory.

Community (blog, Twitter, mailing list, IRC)

See http://scrapy.org/community/

from https://github.com/scrapy/scrapy
---------

scrapy-proxynova

Use scrapy with a list of proxies generated from proxynova.com
The first run generates the list of proxies from http://proxynova.com and stores it in the cache.
Each proxy is then checked individually, and any that time out or refuse connections are removed.
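
The check is done by the project's own code; as a rough sketch of the idea (not the repository's implementation), a filter over the cached list could look like this, assuming one host:port per line and the requests library:

    import requests

    def working_proxies(proxies, test_url="http://www.example.com/", timeout=10):
        """Keep only the proxies that fetch test_url before the timeout."""
        alive = []
        for proxy in proxies:
            try:
                requests.get(test_url,
                             proxies={"http": "http://%s" % proxy},
                             timeout=timeout)
                alive.append(proxy)
            except requests.RequestException:
                pass  # timed out or refused the connection: drop it
        return alive

    # Filter the cached list in place (proxies.txt is the default cache file).
    with open("proxies.txt") as f:
        candidates = [line.strip() for line in f if line.strip()]
    with open("proxies.txt", "w") as f:
        f.write("\n".join(working_proxies(candidates)) + "\n")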
Example:
./run_example.sh
To regenerate the proxy list, run: python proxies.py
In settings.py, add the following line: DOWNLOADER_MIDDLEWARES = { 'scrapy_proxynova.middleware.HttpProxyMiddleware': 543 }

Options

Set these options in settings.py (a combined example follows the list).
  • PROXY_SERVER_LIST_CACHE_FILE — the file in which the proxy list is stored. Default: proxies.txt.
  • PROXY_BYPASS_PERCENT — the percentage chance that a request bypasses the proxy and uses a direct connection.
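
Putting the middleware registration and the two options together, a settings.py could look like this sketch (only the middleware path and the proxies.txt default come from the README; the other values are illustrative):

    # settings.py

    DOWNLOADER_MIDDLEWARES = {
        'scrapy_proxynova.middleware.HttpProxyMiddleware': 543,
    }

    # File in which the generated proxy list is cached.
    PROXY_SERVER_LIST_CACHE_FILE = 'proxies.txt'

    # Chance (in percent) of bypassing the proxy for a given request;
    # the README states no default, so 10 here is made up.
    PROXY_BYPASS_PERCENT = 10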

from https://github.com/darthbear/scrapy-proxynova
-------------------------------------------------------------------
 
WebHubBot

WebHubBot is an open-source web crawler built on Python and MongoDB. It scrapes the video title, duration, mp4 link, cover URL, and the corresponding PornHub page link from the well-known site PornHub. With 10 threads issuing requests at the same time, it can crawl more than 5 million PornHub videos per day (actual speed depends on your network).

Features:
    Built mainly on the scrapy crawling framework.
    A cookie and a user agent are drawn at random from the cookie pool and the UA pool and attached to the Spider.
    start_requests launches 5 Requests based on PornHub's categories, crawling the five categories at the same time (sketched below).
    Paginated crawling is supported; newly found pages are added to the pending-crawl queue.
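
As a sketch of that pattern (not the project's actual code), start_requests can attach a randomly drawn user agent and cookie to one Request per category; the pools and category paths below are made-up stand-ins:

    import random

    import scrapy

    # Illustrative pools; the real project maintains much larger ones.
    UA_POOL = [
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9) AppleWebKit/537.36",
    ]
    COOKIE_POOL = [{"platform": "pc"}, {"platform": "mobile"}]

    # Stand-ins for the five category URLs.
    CATEGORY_PATHS = ["/video?c=%d" % i for i in range(1, 6)]

    class HubSpider(scrapy.Spider):
        name = "webhub"
        base_url = "https://www.pornhub.com"

        def start_requests(self):
            # One Request per category, each with a random UA and cookie,
            # so the five categories are crawled in parallel.
            for path in CATEGORY_PATHS:
                yield scrapy.Request(
                    self.base_url + path,
                    headers={"User-Agent": random.choice(UA_POOL)},
                    cookies=random.choice(COOKIE_POOL),
                    callback=self.parse,
                )

        def parse(self, response):
            # Extract title, duration, mp4 link, and cover URL here, then
            # yield a Request for the next page to keep pagination going.
            pass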

Setup:
    Install MongoDB and start it; no configuration is needed.
    Install Scrapy.
    Install the Python dependency modules: pymongo, json, requests (a storage sketch follows this list).
    Adjust the Scrapy settings for request interval, number of Request threads started, and so on, to suit your needs.
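
Since items end up in MongoDB through pymongo, storage can be handled by a small item pipeline along these lines (the database and collection names are assumptions, not taken from the project):

    import pymongo

    class MongoPipeline(object):
        """Write each scraped item to MongoDB as one document."""

        def open_spider(self, spider):
            # Default local MongoDB, matching the no-configuration install above.
            self.client = pymongo.MongoClient("localhost", 27017)
            self.collection = self.client["webhub"]["videos"]  # assumed names

        def close_spider(self, spider):
            self.client.close()

        def process_item(self, item, spider):
            self.collection.insert_one(dict(item))
            return item

The pipeline would then be enabled through the project's ITEM_PIPELINES setting.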

Run:
    python PornHub/quickstart.py

from https://github.com/xiyouMc/WebHubBot