Monday, 25 September 2017

Python中文社区

Media

WeChat official account: Python中文社区
Zhihu column: Python中文社区
Jianshu: Python中文社区
Weibo: Python中文社区
UC subscription account: Python中文社区
Toutiao.io team account: Python中文社区 (ID 183367)

Community

Community QQ group: 152745094

GitHub Organization

PyCN internal Gitter: https://gitter.im/PyCN
PyCN QQ group: 596796724

Columnists

阿橙 (@sinoandywong), 段小草 (@loveQt), ZZR (@zzr0427), 小丸子 (@abitch), PytLab (@PytLab), 中华 (@hectorhua), 熊球 (@XiongQiuQiu), Kaito (@kaito-kidd), 九茶 (@liuxingming), 七夜 (@qiyeboy), 苍冥 (@eastrd), 哇咔咔 (@A1014280203), 雷霰霆 (@noif), 时空Drei (@stdrei), jay (@juie)

Subtitle Team

eastrd (@eastrd), ictar (@ictar), linchart (@linchart), szthanatos (@szthanatos), alex-marmot (@alex-marmot), heyuanree (@heyuanree), sinoandywong (@sinoandywong), Yauchee (@Yauchee), sxqs-yang (@sxqs-yang), zilongcc (@zilongcc), bubuyo (@bubuyo), yifan1024 (@yifan1024), YoungZiyi (@YoungZiyi), fuckexception (@fuckexception), Lving (@Lving), LoveSn0w (@LoveSn0w), phdhorse41 (@phdhorse41), minminzhong (@minminzhong)

Publication

  English name: PyCN Technology Review (PTR)
  GitHub repository: https://github.com/PyCN/PTR

Contact Us

Email: pythoncn2016@gmail.com
WeChat: AndyWong188

A blog application based on Django + MySQL + Redis + Celery

A free, open-source blog system with a jQuery + Bootstrap + Markdown front end.
Features implemented so far:
1. User registration, login, and avatar upload
2. Publishing posts, with categories, tag collections, most-popular posts, and latest comments
3. Comments, likes, and a list of posts a user has commented on
4. File upload and download
5. User permission control
6. QR code generation
7. Site caching
8. A full test suite
9. jQuery, Bootstrap, and Markdown support
10. Search via haystack + Whoosh + jieba
11. Logging via the logging module
12. Master/slave database replication
A live deployment runs on an Alibaba Cloud server at http://cblog.xyz
Usage:
1. Install dependencies
$ sudo pip install -r requirements.txt
Ubuntu:
   $ sudo apt-get install redis-server
   $ sudo apt-get install rabbitmq-server
CentOS:
   $ sudo yum install nginx
   $ sudo setsebool httpd_can_network_connect on -P
   $ sudo yum install redis
   $ sudo systemctl start redis.service
   $ sudo systemctl enable redis.service # start on boot

   $ wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
   $ sudo rpm -ivh mysql-community-release-el7-5.noarch.rpm
   $ sudo yum install mysql-community-server
   $ sudo yum install mysql-community-devel
   $ sudo service mysqld restart
   $ sudo systemctl enable mysqld.service # start on boot

   $ wget https://github.com/rabbitmq/rabbitmq-server/releases/download/rabbitmq_v3_6_10/rabbitmq-server-3.6.10-1.el7.noarch.rpm
   $ rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc
   $ sudo yum install rabbitmq-server-3.6.10-1.el7.noarch.rpm
   $ sudo systemctl enable rabbitmq-server.service # start on boot

2. Create the MySQL database
   Set the MySQL root password: $ mysqladmin -u root password "newpass"
   Set the MySQL time zone:
   $ sudo vim /etc/my.cnf # under [mysqld], add: default-time-zone='+8:00'
   $ mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -D mysql -u root -p
   Log in to MySQL from the shell: $ mysql -u root -p
   Create the Blog database:  mysql> CREATE DATABASE `Blog` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
   Grant privileges:          mysql> GRANT ALL ON Blog.* TO your_username@localhost IDENTIFIED BY 'your_password';
   Exit:                      mysql> exit
3. In the directory containing settings.py, create a personal config file mysettings.py (or edit the DATABASES setting in settings.py directly):
   #coding:utf-8
   DEBUG = True
   DATABASES = {
       'default': {
           'ENGINE': 'django.db.backends.mysql',
           'NAME': 'Blog',
           'USER': 'your_username',
           'PASSWORD': 'your_password',
           'HOST': '127.0.0.1',
           'PORT': '3306',
           'OPTIONS': {'charset': 'utf8mb4'},
       }
   }
4. Create the database tables
In the directory containing manage.py, run: $ python manage.py migrate
5. Run the server
$ python manage.py runserver 8080
The site is now available at http://localhost:8080
Note: if the site is served by Apache or Nginx and individual post pages load very slowly, check the Apache/Nginx logs. If jieba reports "Operation not permitted", check whether /tmp/jieba.cache has mode 774; if not, change it to 774.
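That permission fix can also be scripted; a small sketch (the helper name is mine — the path and mode come from the note above):

```python
import os
import stat

def ensure_cache_mode(path="/tmp/jieba.cache", mode=0o774):
    """Chmod the jieba cache file to 774 if it exists with a different mode."""
    if not os.path.exists(path):
        return False  # jieba has not built its cache yet; nothing to fix
    if stat.S_IMODE(os.stat(path).st_mode) != mode:
        os.chmod(path, mode)
    return True
```

Run it as the user that owns the cache file (or under sudo), for example from a deploy script, so the web server worker can read the cache.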

awesome-nlp

A curated list of resources dedicated to Natural Language Processing (NLP).

Maintainers - Keon Kim, Martin Park
Please read the contribution guidelines before contributing.
Please feel free to create pull requests, or email Martin Park (sp3005@nyu.edu)/Keon Kim (keon.kim@nyu.edu) to add links.

Table of Contents

Tutorials and Courses

  • TensorFlow Tutorial on Seq2Seq Models
  • Natural Language Understanding with Distributed Representation Lecture Note by Cho
  • Michael Collins - one of the best NLP teachers. Check out the material on the courses he is teaching.
  • Several tutorials by Radim Řehůřek on using Python and gensim to process corpora and conduct Latent Semantic Analysis and Latent Dirichlet Allocation experiments.

Videos

Deep Learning for NLP

  • Deep Natural Language Processing - Lecture slides and course description for the Deep Natural Language Processing course offered in Hilary Term 2017 at the University of Oxford.
  • Stanford CS 224D: Deep Learning for NLP - Class by Richard Socher. The 2016 content was updated to use TensorFlow; lecture slides, reading materials, and videos for the 2016 class are available, though some lecture videos are missing (lecture 9, and lectures 12 onwards). All videos for the 2015 class are available.
  • Udacity Deep Learning - Deep Learning course on Udacity (using TensorFlow) with a section on using deep learning for NLP tasks, covering how to implement Word2Vec, RNNs, and LSTMs.
  • A Primer on Neural Network Models for Natural Language Processing - Yoav Goldberg, October 2015. No new information; a 75-page summary of the state of the art.
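The Word2Vec implementations these courses walk through all start from the same step: sliding a window over a sentence to emit (target, context) training pairs for the skip-gram model. A minimal illustration (the toy sentence and function name are mine):

```python
def skipgram_pairs(tokens, window=2):
    """Emit (target, context) pairs: every word within `window`
    positions of the target becomes one of its contexts."""
    pairs = []
    for i, target in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((target, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

print(skipgram_pairs("the quick brown fox".split(), window=1))
```

A real implementation feeds these pairs into a shallow network with negative sampling; the pair generation itself is this simple.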

Packages

Implementations

Libraries

    • Twitter-text - A JavaScript implementation of Twitter's text processing library
    • Knwl.js - A Natural Language Processor in JS
    • Retext - Extensible system for analyzing and manipulating natural language
    • NLP Compromise - Natural Language processing in the browser
    • Natural - general natural language facilities for node
    • Scikit-learn: Machine learning in Python
    • Natural Language Toolkit (NLTK)
    • Pattern - A web mining module for the Python programming language. It has tools for natural language processing, machine learning, among others.
    • TextBlob - Providing a consistent API for diving into common natural language processing (NLP) tasks. Stands on the giant shoulders of NLTK and Pattern, and plays nicely with both.
    • YAlign - A sentence aligner, a friendly tool for extracting parallel sentences from comparable corpora.
    • jieba - Chinese Words Segmentation Utilities.
    • SnowNLP - A library for processing Chinese text.
    • KoNLPy - A Python package for Korean natural language processing.
    • Rosetta - Text processing tools and wrappers (e.g. Vowpal Wabbit)
    • BLLIP Parser - Python bindings for the BLLIP Natural Language Parser (also known as the Charniak-Johnson parser)
    • PyNLPl - Python Natural Language Processing Library. General purpose NLP library for Python. Also contains some specific modules for parsing common NLP formats, most notably for FoLiA, but also ARPA language models, Moses phrasetables, GIZA++ alignments.
    • python-ucto - Python binding to ucto (a unicode-aware rule-based tokenizer for various languages)
    • Parserator - A toolkit for making domain-specific probabilistic parsers
    • python-frog - Python binding to Frog, an NLP suite for Dutch. (pos tagging, lemmatisation, dependency parsing, NER)
    • python-zpar - Python bindings for ZPar, a statistical part-of-speech tagger, constituency parser, and dependency parser for English.
    • colibri-core - Python binding to a C++ library for extracting and working with basic linguistic constructions such as n-grams and skipgrams in a quick and memory-efficient way.
    • spaCy - Industrial strength NLP with Python and Cython.
    • textacy - Higher level NLP built on spaCy
    • PyStanfordDependencies - Python interface for converting Penn Treebank trees to Stanford Dependencies.
    • gensim - Python library to conduct unsupervised semantic modelling from plain text
    • scattertext - Python library to produce d3 visualizations of how language differs between corpora.
    • CogComp-NlPy - Light-weight Python NLP annotators.
    • PyThaiNLP - Thai NLP in Python Package.
    • jPTDP - A toolkit for joint part-of-speech (POS) tagging and dependency parsing. jPTDP provides pre-trained models for 40+ languages.
    • CLTK: The Classical Language Toolkit is a Python library and collection of texts for doing NLP in ancient languages.
    • MIT Information Extraction Toolkit - C, C++, and Python tools for named entity recognition and relation extraction
    • CRF++ - Open source implementation of Conditional Random Fields (CRFs) for segmenting/labeling sequential data & other Natural Language Processing tasks.
    • CRFsuite - CRFsuite is an implementation of Conditional Random Fields (CRFs) for labeling sequential data.
    • BLLIP Parser - BLLIP Natural Language Parser (also known as the Charniak-Johnson parser)
    • colibri-core - C++ library, command line tools, and Python binding for extracting and working with basic linguistic constructions such as n-grams and skipgrams in a quick and memory-efficient way.
    • ucto - Unicode-aware regular-expression based tokenizer for various languages. Tool and C++ library. Supports FoLiA format.
    • libfolia - C++ library for the FoLiA format
    • frog - Memory-based NLP suite developed for Dutch: PoS tagger, lemmatiser, dependency parser, NER, shallow parser, morphological analyzer.
    • MeTA - MeTA : ModErn Text Analysis is a C++ Data Sciences Toolkit that facilitates mining big text data.
    • Mecab (Japanese)
    • Mecab (Korean)
    • Moses
    • Stanford NLP
    • OpenNLP
    • ClearNLP
    • Word2vec in Java
    • ReVerb - Web-Scale Open Information Extraction
    • OpenRegex - An efficient and flexible token-based regular expression language and engine.
    • CogcompNLP - Core libraries developed in the U of Illinois' Cognitive Computation Group.
    • MALLET - MAchine Learning for LanguagE Toolkit - package for statistical natural language processing, document classification, clustering, topic modeling, information extraction, and other machine learning applications to text.
    • RDRPOSTagger - A robust POS tagging toolkit available (in both Java & Python) together with pre-trained models for 40+ languages.
    • Saul - Library for developing NLP systems, including built in modules like SRL, POS, etc.
    • ATR4S - Toolkit with state-of-the-art automatic term recognition methods.
    • tm - Implementation of topic modeling based on regularized multilingual PLSA.
    • word2vec-scala - Scala interface to word2vec model; includes operations on vectors like word-distance and word-analogy.
    • Epic - Epic is a high performance statistical parser written in Scala, along with a framework for building complex structured prediction models.
    • text2vec - Fast vectorization, topic modeling, distances and GloVe word embeddings in R.
    • wordVectors - An R package for creating and exploring word2vec and other word embedding models
    • RMallet - R package to interface with the Java machine learning tool MALLET
    • dfr-browser - Creates d3 visualizations for browsing topic models of text in a web browser.
    • dfrtopics - R package for exploring topic models of text.
    • sentiment_classifier - Sentiment Classification using Word Sense Disambiguation and WordNet Reader
    • jProcessing - Japanese Natural Language Processing Libraries, with Japanese sentiment classification
    • Clojure-openNLP - Natural Language Processing in Clojure (opennlp)
    • Inflections-clj - Rails-like inflection library for Clojure and ClojureScript
    • postagga - A library to parse natural language in Clojure and ClojureScript
    • whatlang - Natural language recognition library based on trigrams

Services

  • Wit-ai - Natural Language Interface for apps and devices.
  • Iris - Free text search API over large public document collections.

Articles

Review Articles

Word Vectors

Resources about word vectors, aka word embeddings, and distributed representations for words. Word vectors are numeric representations of words that are often used as input to deep learning systems. This process is sometimes called pretraining.
Efficient Estimation of Word Representations in Vector Space and Distributed Representations of Words and Phrases and their Compositionality (http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) - Mikolov et al. 2013. Generates word and phrase vectors; performs well on word similarity and analogy tasks, and includes the Word2Vec source code. Subsamples frequent words (i.e. frequent words like "the" are skipped periodically to speed things up and to improve the vectors for less frequent words). See also the Word2Vec tutorial in TensorFlow.
Deep Learning, NLP, and Representations Chris Olah (2014) Blog post explaining word2vec.
GloVe: Global vectors for word representation - Pennington, Socher, Manning. 2014. Creates word vectors and relates word2vec to matrix factorizations. The evaluation section drew criticism from Yoav Goldberg. GloVe source code and training data are available.
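The word analogy task these papers evaluate reduces to vector arithmetic plus cosine similarity: the vector king - man + woman should land nearest to queen. A pure-Python toy (the 2-d "embeddings" are fabricated for illustration; real models use hundreds of dimensions):

```python
import math

# Fabricated 2-d vectors: axis 0 loosely encodes "royalty", axis 1 "gender".
emb = {
    "king":  [0.9,  0.8],
    "queen": [0.9, -0.8],
    "man":   [0.1,  0.8],
    "woman": [0.1, -0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

# king - man + woman, then nearest neighbour by cosine similarity
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max((w for w in emb if w != "king"), key=lambda w: cosine(target, emb[w]))
print(best)  # queen
```

The query word itself is excluded from the candidates, matching the standard analogy-evaluation protocol.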

Thought Vectors

Thought vectors are numeric representations for sentences, paragraphs, and documents. The following papers are listed in order of publication date, each one replacing the last as the state of the art in sentiment analysis.
Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank Socher et al. 2013. Introduces Recursive Neural Tensor Network. Uses a parse tree.
Distributed Representations of Sentences and Documents Le, Mikolov. 2014. Introduces Paragraph Vector. Concatenates and averages pretrained, fixed word vectors to create vectors for sentences, paragraphs and documents. Also known as paragraph2vec. Doesn't use a parse tree. Implemented in gensim. See doc2vec tutorial
Deep Recursive Neural Networks for Compositionality in Language Irsoy & Cardie. 2014. Uses Deep Recursive Neural Networks. Uses a parse tree.
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks Tai et al. 2015 Introduces Tree LSTM. Uses a parse tree.
Semi-supervised Sequence Learning Dai, Le 2015 "With pretraining, we are able to train long short term memory recurrent networks up to a few hundred timesteps, thereby achieving strong performance in many text classification tasks, such as IMDB, DBpedia and 20 Newsgroups."
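A useful baseline to keep in mind while reading these papers: the crudest thought vector is just the average of a sentence's word vectors, which Paragraph Vector and the tree-structured models above improve on. A sketch with made-up vectors:

```python
def average_vector(tokens, emb):
    """Average the vectors of the tokens present in emb - a crude sentence vector."""
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        raise ValueError("no known tokens in sentence")
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]

emb = {"good": [1.0, 0.0], "movie": [0.0, 1.0]}
print(average_vector(["good", "movie"], emb))  # [0.5, 0.5]
```

Averaging discards word order entirely, which is exactly the weakness the parse-tree and sequence models in this section address.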

Machine Translation

Neural Machine Translation by Jointly Learning to Align and Translate - Bahdanau, Cho 2014. "Comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation." Implements the attention mechanism. English-to-French demo.
Sequence to Sequence Learning with Neural Networks - Sutskever, Vinyals, Le 2014 (NIPS presentation). Uses LSTM RNNs to generate translations. "Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8." seq2seq tutorial in TensorFlow.
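The attention mechanism the Bahdanau & Cho paper introduces can be sketched in a few lines: score each encoder state against the current decoder state, softmax the scores into weights, and return the weighted sum as the context vector. Dot-product scoring below is a simplification — the paper learns the scoring function with a small network:

```python
import math

def attend(decoder_state, encoder_states):
    """Dot-product attention: returns (weights, context vector)."""
    scores = [sum(d * e for d, e in zip(decoder_state, h)) for h in encoder_states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]      # softmax over source positions
    dim = len(encoder_states[0])
    context = [sum(w * h[i] for w, h in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context
```

The decoder consumes the context vector alongside its own state at every output step, so each translated word can "look back" at different source words instead of relying on one fixed sentence vector.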

Single Exchange Dialogs

Neural Responding Machine for Short-Text Conversation - Shang et al. 2015. Uses a Neural Responding Machine trained on a Weibo dataset. Achieves one-round conversations with 75% appropriate responses.
A Neural Conversation Model - Vinyals, Le 2015. Uses LSTM RNNs to generate conversational responses within the seq2seq framework. Seq2Seq was originally designed for machine translation; here it "translates" a single sentence of up to around 79 words into a single response sentence, with no memory of previous dialog exchanges. Used in Google's Smart Reply feature for Inbox.

Memory and Attention Models (from DL4NLP)

Memory Networks - Weston et al. 2014, and End-To-End Memory Networks - Sukhbaatar et al. 2015. Memory networks are implemented in MemNN. They attempt to solve tasks requiring reasoning, attention, and memory.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks - Weston 2015. Classifies QA tasks (single supporting fact, yes/no, etc.) and extends memory networks.
Evaluating prerequisite qualities for learning end-to-end dialog systems - Dodge et al. 2015. Tests memory networks on four tasks, including a Reddit dialog task. See Jason Weston's lecture on MemNN.
Neural Turing Machines Graves et al. 2014.

General Natural Language Processing

Named Entity Recognition

Neural Network

Supplementary Materials

Blogs

Credits

part of the lists are from