Thursday 20 February 2014

Exporting WordPress Blog Posts as Static Files

My hope is that some intrepid WP user may try following these steps and use them as a starting point for a proper HOWTO.
  1. If you want comments, you'll need to switch from whatever is built into WordPress to an outside JavaScript-based service like Disqus. Disqus can import your existing comments when you set it up. Disclaimer: I have never used this service and know nothing about it - there may be better alternatives.

  2. Set up WordPress on the machine where you want to do your writing and editing. The WP site has copious instructions for all kinds of installation scenarios.

  3. Configure WP to use 'fancy' permalinks - not the default, which uses query string parameters. Basically, if there's a question mark in the URL, you can't mirror the site. If you're on OS X, you will now have to struggle with mod_rewrite and .htaccess permissions for a while (there's a sample .htaccess after this list).

  4. Configure WP to allow robots access (otherwise wget will not work in the next step); see the robots.txt note after this list.

  5. Use wget to crawl your new blog and turn it into a bunch of static files:
    wget --mirror -p --html-extension --convert-links http://your.local.url/
    What this does is explained in detail here. I've left off some unnecessary flags.

  6. Set up Apache on your blog server to serve static content from wherever you want your blog files to live (a minimal virtual host sketch follows this list).

  7. Now copy the static files you created with wget to their new home on the remote machine using a secure transfer method like rsync or sftp, as in the example below.
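
For step 3, the rewrite rules behind 'fancy' permalinks are the standard block WordPress writes to .htaccess; the sketch below assumes the blog lives at the web root, and on OS X you may also need AllowOverride All in the matching <Directory> block before Apache will read the file at all.

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    # Anything that is not an existing file or directory goes to index.php
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>
    # END WordPress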
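
For step 4, whichever setting you flip in WP, the local site's robots.txt has to end up permissive before the crawl; an empty Disallow allows everything:

    # Allow all crawlers to fetch everything
    User-agent: *
    Disallow:

Alternatively, wget itself can be told to ignore robots.txt with -e robots=off, though fixing the blog's setting is cleaner.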
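
For step 6, a minimal virtual host is enough to serve the mirrored files; the hostname and paths here are placeholders, and the Require line assumes Apache 2.4 (2.2 uses Order/Allow instead).

    <VirtualHost *:80>
        ServerName blog.example.com
        DocumentRoot /var/www/blog
        <Directory /var/www/blog>
            # Plain static files: no directory listings, no overrides needed
            Options -Indexes
            AllowOverride None
            Require all granted
        </Directory>
    </VirtualHost>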
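
For step 7, a single rsync invocation will do; the local directory name is whatever wget created (usually the hostname you mirrored), and the user, host, and remote path below are placeholders.

    # -a preserves permissions and timestamps, -v is verbose, -z compresses in transit,
    # --delete removes remote files that no longer exist locally
    rsync -avz --delete your.local.url/ user@example.com:/var/www/blog/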
from http://idlewords.com/2009/09/using_wordpress_to_generate_flat_files.htm
------------------------------------

Saving Your WordPress Blog to CD


The wife has been writing her mandatory university course diary as a WordPress blog, but now she needs to hand it in.
> "Can you put it on a CD for me?" she asks.
Unix to the rescue!

Following this excellent article, I had the site saved to disk in a jiffy, with all links modified to work offline and all images and CSS files copied down.
For your reference, here’s the command I used.
wget --mirror -w 2 -p --html-extension --convert-links -P ~/path/to/save/locally -H -Dwordpress.com http://yourblog.wordpress.com
Quoting Jim's article for the meaning of the command line options:
> --mirror: specifies to mirror the site. Wget will recursively follow all links on the site and download all necessary files. It will also only get files that have changed since the last mirror, which is handy in that it saves download time.
>
> -w: tells wget to "wait" or pause between requests, in this case for 2 seconds. This is not necessary, but is the considerate thing to do. It reduces the frequency of requests to the server, thus keeping the load down. If you are in a hurry to get the mirror done, you may eliminate this option.
>
> -p: causes wget to get all required elements for the page to load correctly. Apparently, the mirror option does not always guarantee that all images and peripheral files will be downloaded, so I add this for good measure.
>
> --html-extension: All files with a non-html extension will be converted to have an html extension. This will convert any cgi or asp generated files to html extensions for consistency.
>
> --convert-links: all links are converted so they will work when you browse locally. Otherwise, relative (or absolute) links would not necessarily load the right pages, and style sheets could break as well.
>
> -P (prefix folder): the resulting tree will be placed in this folder. This is handy for keeping different copies of the same site, or keeping a "browsable" copy separate from a mirrored copy.

I've also added my own at the end of Jim's version:
-H -Dwordpress.com
These options tell wget to span hosts (-H) but only fetch files within the wordpress.com domain (-Dwordpress.com); otherwise the stylesheets and images for the blog, which are served from other subdomains of wordpress.com, will not be downloaded.

from  http://blog.mattwynne.net/2008/04/11/saving-your-wordpress-blog-to-cd/