#%env/templates/metas.template%# #%env/templates/header.template%# #%env/templates/submenuRanking.template%#

Heuristics Configuration

A heuristic is an 'experience-based technique that helps in problem solving, learning and discovery' (Wikipedia). The search heuristics that can be switched on here are techniques that help discover possible search results based on link guessing, in-search crawling and requests to other search engines. When a search heuristic is used, the resulting links are not shown directly as search results; instead, the loaded pages are indexed and stored like any other content. This ensures that blacklists can be applied and that the searched word actually appears on the page that was discovered by the heuristic.

The success of a heuristic is marked with an image (heuristic:&lt;name&gt; (redundant) / heuristic:&lt;name&gt; (new link)) below the favicon to the left of the search result entry:
heuristic:<name> (redundant)
The search result was discovered by a heuristic, but the link was already known to YaCy
heuristic:<name> (new link)
The search result was discovered by a heuristic and was not previously known to YaCy
'site'-operator: instant shallow crawl

When a search is made using a 'site'-operator (like: 'download site:yacy.net'), the host given in the site-operator is instantly crawled with a host-restricted depth-1 crawl. That means: right after the search request, the portal page of the host is loaded, together with every page linked from it that points to a page on the same host. Because this 'instant crawl' must obey the robots.txt and a minimum access time between two consecutive pages, this heuristic is rather slow, but it may discover all wanted search results in a second search (after a short pause of a few seconds).
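The host-restricted depth-1 crawl described above can be sketched roughly as follows. This is a minimal illustration, not YaCy's actual implementation: the function name, the fixed delay and the use of Python's standard library are assumptions for the sake of the example.

```python
import time
import urllib.robotparser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def instant_site_crawl(host, delay=1.0):
    """Depth-1 crawl restricted to one host: fetch the portal page,
    then every same-host page it links to, obeying robots.txt and
    pausing `delay` seconds between consecutive requests."""
    base = f"http://{host}/"
    rp = urllib.robotparser.RobotFileParser(urljoin(base, "/robots.txt"))
    rp.read()
    if not rp.can_fetch("*", base):
        return []
    html = urlopen(base).read().decode("utf-8", errors="replace")
    parser = LinkParser()
    parser.feed(html)
    fetched = [base]
    for link in parser.links:
        url = urljoin(base, link)
        if urlparse(url).netloc != host:
            continue          # host-restricted: skip external links
        if not rp.can_fetch("*", url):
            continue          # obey robots.txt
        time.sleep(delay)     # minimum access time between requests
        urlopen(url)          # in YaCy the page would be parsed and indexed here
        fetched.append(url)
    return fetched
```

The per-request delay is what makes this heuristic slow: for a portal page with n same-host links, the crawl takes at least n × delay seconds, which is why results typically appear only on a repeated search.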

search-result: shallow crawl on all displayed search results add as global crawl job

When a search is made, all displayed result links are crawled with a depth-1 crawl. This means: right after the search request, every result page is loaded, together with every page linked from it. If you check 'add as global crawl job', the pages to be crawled are added to the global crawl queue (remote peers can pick up pages to be crawled). The default is to add the links to the local crawl queue (your peer crawls the linked pages).
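The local-versus-global queueing choice above can be sketched as follows. This is an assumed simplification for illustration only; the function name, the job dictionary shape and the use of plain deques are not YaCy's actual data structures.

```python
from collections import deque

def enqueue_result_links(result_links, add_as_global, local_queue, global_queue):
    """Sketch of the search-result heuristic's queueing step: each
    displayed result URL becomes a depth-1 crawl job, pushed either to
    the global queue (remote peers may pick jobs up) or, by default,
    to the local queue (this peer crawls the linked pages itself)."""
    target = global_queue if add_as_global else local_queue
    for url in result_links:
        target.append({"url": url, "depth": 1})
    return target
```

The only difference between the two modes is which queue receives the jobs; the depth-1 crawl itself proceeds identically in both cases.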

opensearch load external search result list from active systems below

When this heuristic is used, every search request line triggers a call to the listed OpenSearch systems until enough results are available to fill the current search page. 20 results are taken from each remote system, loaded simultaneously, then parsed and indexed immediately. To find out more about OpenSearch, see OpenSearch.org.
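An OpenSearch URL template tells the caller where to substitute the query terms. A minimal example entry from an OpenSearch description document might look like this (the host and path are placeholders, not a real service; the {searchTerms}, {startIndex} and {count} parameters are defined by the OpenSearch 1.1 URL template syntax):

```xml
<Url type="application/rss+xml"
     template="https://search.example.org/rss?q={searchTerms}&amp;start={startIndex}&amp;num={count}"/>
```

In the table below, the Url column expects a template of this form, with {searchTerms} marking where YaCy inserts the query.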

Available/Active Opensearch System #{osdcfg}# #{/osdcfg}#
Active Title Comment URL (OpenSearch URL template syntax) delete
#[title]# #[comment]#
new
#[osderrmsg]#

With the button "discover from index" you can search the metadata of your local index (Web Structure Index) for systems that support the OpenSearch specification. The task is started in the background; it may take some minutes before new entries appear (after refreshing the page). Alternatively, you may copy an example config file located in defaults/heuristicopensearch.conf to the DATA/SETTINGS directory. For the discover function, the web graph option of the web structure index and the fields target_rel_s, target_protocol_s and target_urlstub_s have to be switched on in the webgraph Solr schema. #{osdsolrfieldswitch}##{/osdsolrfieldswitch}#
#%env/templates/footer.template%#