#%env/templates/metas.template%# #%env/templates/header.template%# #%env/templates/submenuCrawlURLFetch.template%#

URL-Fetcher

Fetch new URLs to crawl

The newly added URLs will be crawled without any filter restrictions except for the static stop-words. The Re-Crawl option isn't used and the sites won't be stored in the Proxy Cache. Text and media types will be indexed. Since these URLs will be requested explicitly from another peer, they won't be distributed for remote indexing.

:
#(hostError)#:: Malformed URL#(/hostError)# #(saved)#::
:
#(/saved)#
#(peersKnown)#::
:
 : #(peerError)#::  Error fetching URL-list from #[hash]#:#[name]#::  The peer with hash #[hash]# doesn't seem to be online anymore#(/peerError)#
#(/peersKnown)#
Frequency:


:   #(freqError)#:: Invalid period, fetching only once#(/freqError)#
#(threadError)#:: Error stopping the thread: it isn't alive anymore:: Error restarting the thread: it isn't alive anymore#(/threadError)# #(runs)#::
Thread to fetch URLs is #(status)#running::stopped::paused#(/status)#
Total runs:
#[totalRuns]#
Total fetched URLs:
#[totalFetchedURLs]#
Total failed URLs:
#[totalFailedURLs]#
Last run duration:
#[lastRun]# ms
Last server response:
#[lastServerResponse]#
Last fetched URLs:
#[lastFetchedURLs]#
Last failed URLs:
#[error]#
    #{error}#
  • #[reason]#: #[url]#
  • #{/error}#
:
minutes
#(status)# :: :: #(/status)#
#(/runs)# #%env/templates/footer.template%#