---
name: crawl
version: 0.4_8
origin: www/crawl
comment: A small, efficient web crawler with advanced features
arch: freebsd:9:x86:64
www: http://www.monkey.org/~provos/crawl/
maintainer: ports@FreeBSD.org
prefix: /usr/local
licenselogic: single
flatsize: 76712
desc: |
  The crawl utility starts a depth-first traversal of the web at the
  specified URLs. It stores all JPEG images that match the configured
  constraints.  Crawl is fairly fast and allows for graceful termination.
  After terminating crawl, it is possible to restart it at exactly
  the same spot where it was terminated. Crawl keeps a persistent
  database that allows multiple crawls without revisiting sites.

  The main reason for writing crawl was the lack of simple open source
  web crawlers. Crawl is only a few thousand lines of code and fairly
  easy to debug and customize.

  Some of the main features:
   - Saves encountered JPEG images
   - Image selection based on regular expressions and size constraints
   - Resume previous crawl after graceful termination
   - Persistent database of visited URLs
   - Very small and efficient code
   - Supports robots.txt

  WWW: http://www.monkey.org/~provos/crawl/
deps:
  db41: {origin: databases/db41, version: 4.1.25_4}
  libevent: {origin: devel/libevent, version: 1.4.14b_2}
categories: [www]
files:
  /usr/local/bin/crawl: 6dec7a6ba482ec52e0a71d694024120538123e1d32e8a3d8d0df37b0cfa3ed09
  /usr/local/man/man1/crawl.1.gz: 261c3e3b36bada5dd590146f6efaa7cd78942c56c1857e48776faa9f8d06f50f
  /usr/local/share/examples/crawl/crawl.conf: 5cc9b931d43c51cfcc7210b51f98363da1301f25c3bb66964cb1bbbc0cddf37b
directories:
  /usr/local/share/examples/crawl/: n
scripts: {}
