


I'm looking for a way to render arbitrary Web pages -- including CSS and JavaScript -- and access the resulting DOM tree programmatically, i.e., in an automated/headless fashion. I want to be able to ask the following questions of the resulting DOM tree:
  • For a given element, what font family, size, and color is the text?
  • How tall and wide (in pixels) is a given <div>, <table>, etc.?
  • What are the x/y coordinates of a given element (from the upper-left corner of the page, or lower-left, or wherever)?
  • For a given element, what is its text content?
The rendering must be state-of-the-art, handling advanced CSS that Firefox, Safari and IE handle. It should work on Linux. Bonus points if there's a Python API for this magical DOM tree.
This is all stuff that standard in-page JavaScript could accomplish, but the catch with me is that I need to be able to do it in a completely automated way, on arbitrary pages, on a headless server.
I know Gecko and Webkit provide this, but I'm not sure where to start with them. The docs and articles I've read seem to be focused more on embedding the full browser window in a GUI application than embedding the rendering engine itself and manipulating the resulting pages.
Help! If you have any clues, I'd be grateful if you left a comment or got in touch with me.


Posted by Andrew Sutherland on May 2, 2008, at 2:45 a.m.:

PyXPCOM should handle the Python part of the Gecko equation.
I myself am no specific help on the Gecko side of things, but I think the following post/thread on the PyXPCOM mailing list may be of assistance:

Posted by Rene Dudfield on May 2, 2008, at 3:19 a.m.:

You can set up a headless X server, then run Firefox (or whatever browser) with a standard build.

Posted by Michael Twomey on May 2, 2008, at 4:46 a.m.:

If you want an example of using WebKit to do headless stuff, you could look at webkit2png, a tool for taking screenshots of websites from the command line. It uses WebKit and PyObjC, so you'll need a Mac. It doesn't do any DOM stuff that I can see, but it might be a useful starting point for writing an automated tool.

Posted by Justin Mason on May 2, 2008, at 5:01 a.m.:

This might be useful, if you're doing this on a *NIX platform. Looks like it's well-maintained, too, since the most recent release was only a couple of weeks ago.

Posted by Gábor Farkas on May 2, 2008, at 5:10 a.m.:

In the case of Firefox, there are two issues:
1. Running it headless: for this, try Xvfb. It starts a headless X server; then you can run Firefox in it.
2. Communicating with the Firefox instance: there is PyXPCOM, as others have already mentioned, which could make it work.
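The Xvfb half of that recipe can be sketched from Python. This is a rough, untested sketch, not a working harness: the display number `:99`, the screen geometry, and the assumption that `Xvfb` and `firefox` are on `$PATH` are all mine.

```python
import os
import subprocess

def xvfb_command(display=":99", size="1024x768x24"):
    # Command line for a bare virtual framebuffer on the given display.
    return ["Xvfb", display, "-screen", "0", size]

def firefox_command(url):
    return ["firefox", url]

def launch(url, display=":99"):
    # Start Xvfb first, then point Firefox at it via $DISPLAY.
    xvfb = subprocess.Popen(xvfb_command(display))
    env = dict(os.environ, DISPLAY=display)
    browser = subprocess.Popen(firefox_command(url), env=env)
    return xvfb, browser

# launch("")  # requires Xvfb and Firefox to be installed
```

A real setup would also wait for the X server to accept connections before starting the browser.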

Posted by Jason on May 2, 2008, at 7:04 a.m.:

If you want to muck in C++ code you could look at RenderTreeAsText in Webkit. For actually setting up the rendering engine, there's some relatively simple high-level apis in the wx and qt ports that seem pretty readable; the kind of api you'd use for those neat "write a web browser in 5 lines of code" demos. See WebFrame in particular. Disclaimer: I've never written anything with webkit, but it might be fun to learn.

Posted by anonymous on May 2, 2008, at 8:15 a.m.:

What about Selenium? or Watir?

Posted by anonymous on May 2, 2008, at 8:50 a.m.:

I haven't tried this (but am planning to), so I don't know if it really meets your needs, but HTMLUnit is a Java-based headless browser (designed for testing).

Posted by anonymous on May 2, 2008, at 10:15 a.m.:

Attributes such as pixel width, height, font, etc. will either be determined by CSS, or they will be agent- (and user-setup-) specific.
The pixel width of a div with width 50% will depend on the size of the viewport -- which of course could be anything. Do you intend to 'fake' the settings of a user agent? If so, then a simple calculation would get the pixel width (as you would know your viewport dimensions).
I really would consider seeing how far you can get by simply manipulating the DOM and parsing the CSS (both of which are easily achieved with the Python libraries urllib, lxml / BeautifulSoup, and cssutils).
I know, I know; none of this helps with JavaScript-dependent attributes.
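To illustrate the "fake the viewport and calculate" idea: the comment names lxml and cssutils, but a self-contained sketch is possible with only the standard library, resolving percentage widths from inline style attributes against an assumed viewport. The sample HTML and the 1024px viewport are invented for the example; real pages would need a full cascade, not just inline styles.

```python
from html.parser import HTMLParser

VIEWPORT_WIDTH = 1024  # assumed user-agent setting

def resolve_width(style, viewport=VIEWPORT_WIDTH):
    # Turn "width: 50%" or "width: 300px" into a pixel count.
    for decl in style.split(";"):
        if ":" not in decl:
            continue
        prop, value = decl.split(":", 1)
        if prop.strip() != "width":
            continue
        value = value.strip()
        if value.endswith("%"):
            return viewport * float(value[:-1]) / 100.0
        if value.endswith("px"):
            return float(value[:-2])
    return None  # width not declared inline

class WidthCollector(HTMLParser):
    # Records the resolved width of every element with an inline style.
    def __init__(self):
        super().__init__()
        self.widths = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "style" in attrs:
            self.widths[attrs.get("id", tag)] = resolve_width(attrs["style"])

collector = WidthCollector()
collector.feed('<div id="main" style="width: 50%"><p style="width: 300px">hi</p></div>')
print(collector.widths)  # {'main': 512.0, 'p': 300.0}
```

As the comment says, this static approach breaks down as soon as JavaScript rewrites the page.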

Posted by alan taylor on May 2, 2008, at 10:36 a.m.:

Have you looked at JSSh? Not sure if it fits the bill, but it just might -- it's a "Mozilla C++ extension module that allows other programs (such as telnet) to establish JavaScript shell connections to a running Mozilla process via TCP/IP". I know it can return some parts of the DOM, but I'm not sure how much detailed info you can get back from it.

Posted by Matthew Marshall on May 2, 2008, at 10:42 a.m.:

I've played with doing this a little. The best I came up with was using PyKDE and KHTML. I'm pretty sure it requires an X server, but if nothing else you could use a VNC server.

Posted by Kumar McMillan on May 2, 2008, at 11:40 a.m.:

There are probably several ways to do it, but the first that comes to mind is using the Python driver for Selenium RC ...
from selenium import selenium
# with the selenium-rc (Java) proxy server running at localhost:4444 ...
selenium = selenium("localhost", 4444, "*firefox", "")
selenium.get_html_source() # this includes any JavaScript DOM manipulations, of course
... but I'm not sure how you get the font/text info. Selenium RC is designed to run headless and also has a "grid" implementation so you can throw more hardware at it. Scaling up to the grid is very transparent -- same code as above, more or less.
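On the font/geometry question: the RC Python client also exposes get_eval(), which evaluates JavaScript in the browser and returns the result as a string. A hedged, untested sketch (the element id is invented; reaching the page through browserbot.getCurrentWindow() follows RC convention) that just builds the script to pass in:

```python
def metrics_script(element_id):
    # JavaScript for selenium.get_eval(): RC evaluates in its own window,
    # so the page under test is reached via browserbot.getCurrentWindow().
    return (
        "var d = this.browserbot.getCurrentWindow().document;"
        "var e = d.getElementById('%s');"
        "[e.offsetLeft, e.offsetTop, e.offsetWidth, e.offsetHeight].join(',')"
        % element_id
    )

# Usage sketch (assumes a running RC server and a loaded page):
#   x, y, w, h = map(int, selenium.get_eval(metrics_script("content")).split(","))
```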

Posted by anonymous on May 2, 2008, at 12:02 p.m.:

seconding the jssh suggestion

Posted by Ryan Shaw on May 2, 2008, at 12:26 p.m.:

You might want to check out Crowbar:
Crowbar is a web scraping environment based on the use of a server-side headless Mozilla-based browser. Its purpose is to allow running JavaScript scrapers against a DOM, automating web-site scraping while avoiding all the syntax normalization issues.

Posted by mikeal on May 2, 2008, at 1:49 p.m.:

I would go with Windmill over Selenium if you're going down that road. We have far more comprehensive JavaScript support, and you can use execJS to get back the result of any arbitrary JS.
And JSSh is great, but MozRepl is JSSh on crack.
The whole interface is much, much nicer, and I'm in the middle of a Python <-> JavaScript bridge using MozRepl that I'll be sure to send you a link to once it's public.

Posted by Henning on May 2, 2008, at 2:29 p.m.:

Qt 4.4 is available on all platforms and contains a WebKit port. Fortunately the newest PyQt snapshots also contain support for WebKit. Because Qt can render every widget to a pixmap, it should be fairly easy. To run Qt headless you could use Xvfb.
To access the DOM you can query it with JavaScript.
The following is _not_ tested:
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import *
import sys
app = QApplication(sys.argv)
browser = QWebView()
#browser.setHtml("Hello, world")
pm = QPixmap.grabWidget(browser)"website.jpg")
body ="document.getElementsByTagName('body')[0].innerHTML")

Posted by anonymous on May 2, 2008, at 6:08 p.m.:

HTMLUnit is a very good headless browser implementation. It supports different browsers and JavaScript (using Rhino, I think). And finally, it is under active development.
Unfortunately, it's a Java library, but you could use Jython to access it.

Posted by anonymous on May 2, 2008, at 6:12 p.m.:

I looked at a few open source projects to do headless rendering. It's tempting to use Firefox/Gecko, but the learning curve is steep: it's two million lines of Netscape-legacy C++ code. But if you figure it out, you'll have a fine tool.
What is working for me now is the Lobo renderer (from the Cobra browser; in Java). It's not the best rendering engine, but it's decent, and easy to program. You can get rendered blocks and DOM objects, and answer all the questions as to block location, color, text, etc. It can be made to work on Linux completely headless, without an X server; the way I have it working, it takes in a URL or HTML and saves to another textual file format.
What's important is to encapsulate your choice of rendering engine, because it will change.
Email me at dmitrim at yahoo dot com if you need help.
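The advice to encapsulate the engine choice can be sketched as a thin interface that the rest of the program codes against. Every name here (Renderer, RenderedBox, FakeRenderer) is invented for illustration; a real implementation would drive Gecko, WebKit, or Cobra behind the same interface.

```python
from abc import ABC, abstractmethod

class RenderedBox:
    # Minimal record answering the questions in the post, per element.
    def __init__(self, x, y, width, height, font_family, color, text):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.font_family, self.color, self.text = font_family, color, text

class Renderer(ABC):
    # Swapping rendering engines then touches only one module.
    @abstractmethod
    def render(self, url_or_html):
        """Return a dict mapping element ids to RenderedBox objects."""

class FakeRenderer(Renderer):
    # Stand-in for tests; a real subclass would call into an engine.
    def render(self, url_or_html):
        return {"body": RenderedBox(0, 0, 1024, 768, "serif", "#000", url_or_html)}

boxes = FakeRenderer().render("<p>hi</p>")
print(boxes["body"].width)  # 1024
```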

Posted by Phil on May 2, 2008, at 7:31 p.m.:

Personally I'd try it with MozRepl and an X virtual framebuffer:

Posted by Daniel on May 2, 2008, at 7:46 p.m.:

As suggested above, run Firefox on a virtual X server. Use a Firefox extension (MozRepl or JSSh) to get automated control over the browser.
I set up a system doing exactly this (for taking screenshots) last summer. In the end it barely took any code, just a fair amount of faffing with config files. Happy to give more details if it's helpful: (my first name) at

Posted by rex on May 3, 2008, at 8:44 a.m.:

I went through trying to work out a way to do this ages ago.
Not sure if you're feeling the same, Adrian, but what bothered me (purely on a principle level) was that I really wanted to be able to do this on my server _without_ having to run a headless X server or an instance of Firefox or whatever. I wanted a library that was able to do it and give back my responses without the unnecessary overhead of a browser, X server, etc. running (I know very little about it, but I can't help feeling that these are unnecessary elements in the equation).
Surely there is a way to do what you're asking without having a program running that is designed to actually render the pictures on a screen... *shrug*

Posted by anonymous on May 5, 2008, at 4:55 a.m.:

rex: Rendering HTML nowadays is a heavy, complex task, so there is no lightweight library, unfortunately. It sounds like using PyQt is the smartest approach, because it does not load a full application but only a rendering engine you can fully control. Having a dummy X server on Unix seems to be a necessary evil.

Posted by Eric Moritz on May 5, 2008, at 3:50 p.m.:

I was thinking of this very issue a while back:
I came across this guy's post:
He's using Rhino and some custom JavaScript to emulate the browser's window object.