#%env/templates/metas.template%# #%env/templates/header.template%# #%env/templates/submenuIndexCreate.template%#

Easy Crawl Start

Start Crawling Job:  Here you can define URLs as start points for Web page crawling and launch the crawl. "Crawling" means that YaCy downloads the given website, extracts all links it contains, and then downloads the content behind those links. This is repeated up to the depth specified under "Crawling Depth".
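The depth-limited link following described above can be sketched as a breadth-first traversal. This is an illustrative sketch, not YaCy's actual crawler; the `fetch` callable stands in for the HTTP download step so the logic stays self-contained:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects absolute href targets from anchor tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(start_url, depth, fetch):
    """Download a page, extract its links, download the content behind
    those links, and repeat until `depth` levels have been followed.
    `fetch` is any callable mapping a URL to its HTML text."""
    seen = {start_url}
    frontier = [start_url]
    for _ in range(depth):
        next_frontier = []
        for url in frontier:
            parser = LinkExtractor(url)
            parser.feed(fetch(url))
            for link in parser.links:
                if link not in seen:       # avoid re-downloading pages
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier           # follow links one level deeper
    return seen
```

With depth 1 only the start page and the pages it links to are visited; each extra level of depth follows one more round of links.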

Attribute Value Description
Starting Point:

empty
Enter the start URL of the web crawl here.
Crawling Depth: The depth defines how many levels of links (links of links, and so on) the crawler will follow.

Recently started remote crawls in progress:

#{otherCrawlStartInProgress}# #{/otherCrawlStartInProgress}#
Start Time Peer Name Start URL Intention/Description Depth Accept '?' URLs
#[cre]# #[peername]# #[startURL]# #[intention]# #[generalDepth]# #(crawlingQ)#no::yes#(/crawlingQ)#

Recently started remote crawls, finished:

#{otherCrawlStartFinished}# #{/otherCrawlStartFinished}#
Start Time Peer Name Start URL Intention/Description Depth Accept '?' URLs
#[cre]# #[peername]# #[startURL]# #[intention]# #[generalDepth]# #(crawlingQ)#no::yes#(/crawlingQ)#

Remote Crawling Peers: 

#(remoteCrawlPeers)#

No remote crawl peers available.

::

#[num]# peers available for remote crawling.

Idle Peers #{available}##[name]# (#[due]# seconds due)   #{/available}#
Busy Peers #{busy}##[name]# (#[due]# seconds due)  #{/busy}#
#(/remoteCrawlPeers)# #%env/templates/footer.template%#