The DomCrawler Component

The DomCrawler component eases DOM navigation for HTML and XML documents.


While possible, the DomCrawler component is not designed for manipulation of the DOM or re-dumping HTML/XML.


You can install the component in two different ways: via Composer (the symfony/dom-crawler package on Packagist) or by using the official Git repository.


The Crawler class provides methods to query and manipulate HTML and XML documents.

An instance of the Crawler represents a set (SplObjectStorage) of DOMElement objects, which are basically nodes that you can traverse easily:

use Symfony\Component\DomCrawler\Crawler;

$html = <<<'HTML'
<!DOCTYPE html>
<html>
    <body>
        <p class="message">Hello World!</p>
        <p>Hello Crawler!</p>
    </body>
</html>
HTML;

$crawler = new Crawler($html);

foreach ($crawler as $domElement) {
    print $domElement->nodeName;
}
Specialized Link and Form classes are useful for interacting with HTML links and forms as you traverse through the HTML tree.


The DomCrawler will attempt to automatically fix your HTML to match the official specification. For example, if you nest a <p> tag inside another <p> tag, it will be moved to be a sibling of the parent tag. This is expected and is part of the HTML5 spec. But if you’re getting unexpected behavior, this could be a cause. And while the DomCrawler isn’t meant to dump content, you can see the “fixed” version of your HTML by dumping it.
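
As a minimal sketch of this fixing behavior, here is the same normalization performed by PHP's built-in DOM extension (which the DomCrawler wraps): a <p> nested inside another <p> is moved out to be a sibling when the malformed markup is parsed.

```php
// Sketch using only ext-dom, not DomCrawler itself: parse malformed HTML
// where one <p> is nested inside another.
$document = new \DOMDocument();
// The @ suppresses the parser warnings caused by the invalid nesting.
@$document->loadHTML('<html><body><p>outer<p>inner</p></p></body></html>');

$paragraphs = $document->getElementsByTagName('p');

// Both <p> elements now sit side by side as children of <body>.
echo $paragraphs->length, "\n";                      // 2
echo $paragraphs->item(1)->parentNode->nodeName;     // body
```

Dumping the document afterwards (e.g. with saveHTML()) shows the "fixed" markup described above.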

Node Filtering

Using XPath expressions is really easy:

$crawler = $crawler->filterXPath('descendant-or-self::body/p');


DOMXPath::query is used internally to actually perform an XPath query.
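
To illustrate what happens under the hood, here is a hedged sketch of the same query written directly against ext-dom, using DOMXPath::query() the way filterXPath() does internally (the document markup is the "Hello World!" example from above):

```php
// Sketch: run the XPath expression with plain ext-dom instead of the Crawler.
$document = new \DOMDocument();
@$document->loadHTML('<html><body><p class="message">Hello World!</p><p>Hello Crawler!</p></body></html>');

$xpath = new \DOMXPath($document);
$nodes = $xpath->query('descendant-or-self::body/p');

foreach ($nodes as $node) {
    echo $node->textContent, "\n";   // Hello World! / Hello Crawler!
}
```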

Filtering is even easier if you have the CssSelector component installed. This allows you to use jQuery-like selectors to traverse:

$crawler = $crawler->filter('body > p');

An anonymous function can be used to filter with more complex criteria:

use Symfony\Component\DomCrawler\Crawler;
// ...

$crawler = $crawler
    ->filter('body > p')
    ->reduce(function (Crawler $node, $i) {
        // filter every other node
        return ($i % 2) == 0;
    });

To remove a node the anonymous function must return false.


All filter methods return a new Crawler instance with filtered content.

Node Traversing

Access a node by its position in the list:

$crawler->filter('body > p')->eq(0);

Get the first or last node of the current selection:

$crawler->filter('body > p')->first();
$crawler->filter('body > p')->last();

Get the nodes at the same level as the current selection:

$crawler->filter('body > p')->siblings();

Get the sibling nodes after or before the current selection:

$crawler->filter('body > p')->nextAll();
$crawler->filter('body > p')->previousAll();

Get all the child or parent nodes:

$crawler->filter('body > p')->parents();


All the traversal methods return a new Crawler instance.

Accessing Node Values

Access the value of the first node of the current selection:

$message = $crawler->filterXPath('//body/p')->text();

Access the attribute value of the first node of the current selection:

$class = $crawler->filterXPath('//body/p')->attr('class');

Extract attribute and/or node values from the list of nodes:

$attributes = $crawler
    ->extract(array('_text', 'class'));


The special attribute _text represents the node value.

Call an anonymous function on each node of the list:

use Symfony\Component\DomCrawler\Crawler;
// ...

$nodeValues = $crawler->filter('p')->each(function (Crawler $node, $i) {
    return $node->text();
});
New in version 2.3: As seen here, since Symfony 2.3 the each and reduce closures are passed a Crawler as the first argument. Previously, that argument was a DOMNode.

The anonymous function receives the node (as a Crawler) and the position as arguments. The result is an array of values returned by the anonymous function calls.

Adding the Content

The crawler supports multiple ways of adding the content:

$crawler = new Crawler('<html><body /></html>');

$crawler->addHtmlContent('<html><body /></html>');
$crawler->addXmlContent('<root><node /></root>');

$crawler->addContent('<html><body /></html>');
$crawler->addContent('<root><node /></root>', 'text/xml');

$crawler->add('<html><body /></html>');
$crawler->add('<root><node /></root>');


When dealing with character sets other than ISO-8859-1, always add HTML content using the addHtmlContent() method, whose second parameter lets you specify the target character set.

As the Crawler’s implementation is based on the DOM extension, it is also able to interact with native DOMDocument, DOMNodeList and DOMNode objects:

$document = new \DOMDocument();
$document->loadXML('<root><node /><node /></root>');
$nodeList = $document->getElementsByTagName('node');
$node = $document->getElementsByTagName('node')->item(0);

$crawler->addDocument($document);
$crawler->addNodeList($nodeList);
$crawler->addNode($node);



Special treatment is also given to forms. A selectButton() method is available on the Crawler which returns another Crawler that matches a button (input[type=submit], input[type=image], or a button) with the given text. This method is especially useful because you can use it to return a Form object that represents the form that the button lives in:

$form = $crawler->selectButton('validate')->form();

// or "fill" the form fields with data
$form = $crawler->selectButton('validate')->form(array(
    'name' => 'Ryan',
));

The Form object has lots of very useful methods for working with forms:

$uri = $form->getUri();

$method = $form->getMethod();

The getUri() method does more than just return the action attribute of the form. If the form method is GET, then it mimics the browser’s behavior and returns the action attribute followed by a query string of all of the form’s values.
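
The query-string part of that behavior can be sketched with PHP's core http_build_query() function, which produces the same flat encoding (the action value and field names below are hypothetical, not from the Form API):

```php
// Sketch: what a GET-form URI looks like once the values are appended.
$action = '/search';                                  // hypothetical action attribute
$values = array('q' => 'symfony crawler', 'page' => 2);

$uri = $action.'?'.http_build_query($values);
echo $uri;   // /search?q=symfony+crawler&page=2
```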

You can virtually set and get values on the form:

// set values on the form internally
$form->setValues(array(
    'registration[username]' => 'symfonyfan',
    'registration[terms]'    => 1,
));

// get back an array of values - in the "flat" array like above
$values = $form->getValues();

// returns the values like PHP would see them,
// where "registration" is its own array
$values = $form->getPhpValues();
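
The flat-versus-PHP distinction can be sketched with core PHP alone: parse_str() expands bracketed field names exactly the way PHP would when the form is submitted (this is an illustration of the shape of the data, not the Form implementation).

```php
// Sketch: the "flat" values, as getValues() would return them.
$flat = array(
    'registration[username]' => 'symfonyfan',
    'registration[terms]'    => 1,
);

// Round-trip through a query string to get the PHP view of the same data.
parse_str(http_build_query($flat), $phpValues);

// "registration" is now its own nested array, as with getPhpValues().
print_r($phpValues['registration']);
```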

To work with multi-dimensional fields:

    <input name="multi[]" />
    <input name="multi[]" />
    <input name="multi[dimensional]" />

Pass an array of values:

// Set a single field
$form->setValues(array('multi' => array('value')));

// Set multiple fields at once
$form->setValues(array('multi' => array(
    1             => 'value',
    'dimensional' => 'another value',
)));
This is great, but it gets better! The Form object allows you to interact with your form like a browser, selecting radio values, ticking checkboxes, and uploading files:


// check or uncheck a checkbox
$form['registration[terms]']->tick();
$form['registration[terms]']->untick();

// select an option
$form['registration[birthday][year]']->select(1984);

// select many options from a "multiple" select
$form['registration[interests]']->select(array('symfony', 'cookies'));

// even fake a file upload
$form['registration[photo]']->upload('/path/to/photo.jpg');

What’s the point of doing all of this? If you’re testing internally, you can grab the information off of your form as if it had just been submitted by using the PHP values:

$values = $form->getPhpValues();
$files = $form->getPhpFiles();

If you’re using an external HTTP client, you can use the form to grab all of the information you need to create a POST request for the form:

$uri = $form->getUri();
$method = $form->getMethod();
$values = $form->getValues();
$files = $form->getFiles();

// now use some HTTP client and post using this information

One great example of an integrated system that uses all of this is Goutte. Goutte understands the Symfony Crawler object and can use it to submit forms directly:

use Goutte\Client;

// make a real request to an external site
$client = new Client();
$crawler = $client->request('GET', '');

// select the form and fill in some values
$form = $crawler->selectButton('Log in')->form();
$form['login'] = 'symfonyfan';
$form['password'] = 'anypass';

// submit that form
$crawler = $client->submit($form);