Commit Graph

5074 Commits

Author SHA1 Message Date
Michael Peter Christen
8df8ffbb6d enhanced the snapshot functionality:
- snapshots can now also be xml files which are extracted from the solr
index and stored as individual xml files in the snapshot directory
alongside the pdf and jpg images
- a transaction layer was placed above the snapshot directory to
distinguish snapshots into 'inventory' and 'archive'. This may be used
to do transactions of index fragments using archived solr search results
between peers. This is currently unfinished; we need a protocol to move
snapshots from inventory to archive
- the SNAPSHOT directory was renamed to snapshot and now contains two
snapshot subdirectories: inventory and archive
- snapshots may now be generated by everyone, not only by peers running
on a server with wkhtmltopdf installed. The expert crawl start provides
the snapshot option to everyone. PDF snapshots are now optional and the
option is only shown if wkhtmltopdf is installed.
- the snapshot api now provides requests for historised xml files (see
the fetch sketch below), i.e. call:
http://localhost:8090/api/snapshot.xml?urlhash=Q3dQopFh1hyQ
The result of such an xml file is identical to a solr search result with
only one hit.
The pdf generation has been moved from the http loading process to the
solr document storage process. This may slow down the process a lot, so
a different version of the process may be needed.
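
A minimal sketch of calling the historised-xml api described above, assuming a peer on the default port 8090 and reusing the example url hash from this commit message; this is an illustration, not YaCy code:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SnapshotXmlFetch {
        public static void main(String[] args) throws Exception {
            // fetch the archived solr xml for a single url hash (example hash from above)
            URL api = new URL("http://localhost:8090/api/snapshot.xml?urlhash=Q3dQopFh1hyQ");
            HttpURLConnection con = (HttpURLConnection) api.openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // solr-style result xml containing one hit
                }
            }
        }
    }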
2014-12-09 16:20:34 +01:00
Michael Peter Christen
4111d42c81 Merge branch 'master' of ssh://git@gitorious.org/yacy/rc1.git 2014-12-08 12:40:12 +01:00
Michael Peter Christen
793ce6d13b added confirmation dialogs for row deletion 2014-12-08 11:41:28 +01:00
Michael Peter Christen
cdc21d43b1 more robustness against broken table data in Table_API_p.html -- see bug
report http://mantis.tokeek.de/view.php?id=495
2014-12-08 11:35:40 +01:00
reger
1d3ea35d69 prevent NPE on host link for a too short HeuristicCfg.OpenSearchURL 2014-12-08 01:35:37 +01:00
Michael Peter Christen
a95af11050 enhancement for clearing the crawl queue 2014-12-07 23:43:38 +01:00
reger
5f0bb1214f modified FieldReIndex to reindex queries with a low number of documents first
by internally using a score map with the number of documents as score
and working through the list from low to high.
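
A rough sketch of this ordering idea in plain Java (not the internal score map class YaCy actually uses): queries are sorted ascending by their document count so that small reindex jobs run first.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    public class ReindexOrderSketch {
        // return the queries ordered so that those matching few documents come first
        public static List<String> lowCountFirst(Map<String, Long> docCountPerQuery) {
            List<Map.Entry<String, Long>> entries = new ArrayList<>(docCountPerQuery.entrySet());
            entries.sort(Map.Entry.comparingByValue()); // ascending by document count (the "score")
            List<String> ordered = new ArrayList<>();
            for (Map.Entry<String, Long> e : entries) ordered.add(e.getKey());
            return ordered;
        }
    }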
2014-12-07 04:31:09 +01:00
Michael Peter Christen
d97deb5555 npe fix 2014-12-06 00:43:12 +01:00
Michael Peter Christen
4fe4bf29ad added rss feed output to snapshot servlet which can be used to get a
list of latest/oldest entries in the snapshot database. This is an
example:
http://localhost:8090/api/snapshot.rss?depth=2&order=LATESTFIRST&host=yacy.net&maxcount=100

The properties depth, order, host and maxcount can be omitted. The
meanings of the fields are:
host: select only urls from this host or all, if not given
depth: select only urls at that crawl depth or all, if not given
maxcount: select at most the given number of urls or 10, if not given
order: either LATESTFIRST to select the youngest entries, OLDESTFIRST to
select the oldest entries, or ANY to select any

The rss feed needs administration rights to work; a call to this servlet
with the rss extension must attach login credentials.
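
A hedged sketch of such an authenticated call, assuming the default port and HTTP basic auth purely for illustration; the actual login mechanism depends on how the peer's admin account is configured:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class SnapshotRssFetch {
        public static void main(String[] args) throws Exception {
            // depth, order, host and maxcount are all optional (see field list above)
            URL api = new URL("http://localhost:8090/api/snapshot.rss"
                    + "?depth=2&order=LATESTFIRST&host=yacy.net&maxcount=100");
            HttpURLConnection con = (HttpURLConnection) api.openConnection();
            // attach admin credentials; "admin:password" is a placeholder, and basic auth
            // is an assumption here - adjust to the peer's actual authentication setup
            String token = Base64.getEncoder().encodeToString("admin:password".getBytes("UTF-8"));
            con.setRequestProperty("Authorization", "Basic " + token);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) System.out.println(line); // rss/xml feed
            }
        }
    }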
2014-12-06 00:25:05 +01:00
reger
d6539ba597 Merge origin/master 2014-12-05 01:15:41 +01:00
reger
ff18129def ViewFile servlet: update the index if a newer version is available,
so that the viewed text and the (stored) metadata info match
- to archive it, use a request with a profile that allows indexing (defaultglobaltext) and update the index
   (the resource is loaded and parsed anyway, so it is not an expensive operation)

Request: remove 2 unused init parameters
- number of anchors of the parent
- forkfactor: sum of anchors of all ancestors
2014-12-05 01:13:37 +01:00
Michael Peter Christen
d83de9ecf5 added another path for the convert command because on older Macs
ImageMagick has a different installation location
2014-12-03 18:07:05 +01:00
Michael Peter Christen
226aea5914 added a servlet which can create preview images, preview thumbnails and
preview pdfs from web pages, e.g.:
http://localhost:8090/api/snapshot.png?url=http://yacy.net/en/&width=128&height=128
http://localhost:8090/api/snapshot.jpg?url=http://yacy.net/en/&width=128&height=128
http://localhost:8090/api/snapshot.pdf?url=http://yacy.net/en/

This also supports on-the-fly generation of the preview documents if
the user is an administrator. Otherwise, the servlet fails.
To enable this, you must add wkhtmltopdf, imagemagick and (on headless
servers) xvfb to your operating system.

for detailed instructions, see
97f6089a41
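
A small sketch of fetching such a preview from the servlet and saving it to disk, assuming the default port and the example url above (the url parameter is url-encoded here as a conservative choice; this is not YaCy code):

    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class SnapshotPreviewFetch {
        public static void main(String[] args) throws Exception {
            String page = URLEncoder.encode("http://yacy.net/en/", "UTF-8");
            URL api = new URL("http://localhost:8090/api/snapshot.png?url=" + page
                    + "&width=128&height=128");
            HttpURLConnection con = (HttpURLConnection) api.openConnection();
            // stream the png preview into a local file
            try (InputStream in = con.getInputStream();
                 OutputStream out = new FileOutputStream("preview.png")) {
                byte[] buffer = new byte[8192];
                int n;
                while ((n = in.read(buffer)) != -1) out.write(buffer, 0, n);
            }
        }
    }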
2014-12-03 11:45:48 +01:00
Michael Peter Christen
181911376c showing a list of all threads in the threaddump using the ThreadMXBean counter
(this obviously shows more threads than before?)
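
A minimal illustration of the ThreadMXBean mechanism mentioned here (not the actual YaCy threaddump code):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadListSketch {
        public static void main(String[] args) {
            ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
            // counts live threads including daemon and JVM-internal threads,
            // which may be why more threads show up than before
            System.out.println("thread count: " + tmx.getThreadCount());
            for (ThreadInfo info : tmx.dumpAllThreads(false, false)) {
                System.out.println(info.getThreadId() + " " + info.getThreadName()
                        + " " + info.getThreadState());
            }
        }
    }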
2014-12-02 16:21:06 +01:00
Michael Peter Christen
64887f6b21 show number of threads on status page 2014-12-02 16:04:11 +01:00
Michael Peter Christen
6f0167fac1 get cloned crawl start parameter for snapshots 2014-12-02 12:52:05 +01:00
Michael Peter Christen
97f6089a41 YaCy can now create web page snapshots as pdf documents which can later
be transcoded into jpg for image previews. To create such pdfs you must
do:

Add wkhtmltopdf and imagemagick to your OS, which you can do:
On a Mac, download wkhtmltox-0.12.1_osx-cocoa-x86-64.pkg from
http://wkhtmltopdf.org/downloads.html and download
http://cactuslab.com/imagemagick/assets/ImageMagick-6.8.9-9.pkg.zip
On Debian, run "apt-get install wkhtmltopdf imagemagick"

Then check in /Settings_p.html?page=ProxyAccess: "Transparent Proxy" and
"Always Fresh" - this is used by wkhtmltopdf to fetch web pages using
the YaCy proxy. Using "Always Fresh" it is possible to get all pages
from the proxy cache.

Finally, you will see a new option when starting an expert web crawl.
You can set a maximum crawl depth up to which pdf generation should be
done. The resulting pdfs are then available in
DATA/HTCACHE/SNAPSHOTS/<host>.<port>/<depth>/<shard>/<urlhash>.<date>.pdf
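
A hedged sketch of the external pipeline these tools provide: render a page to pdf with wkhtmltopdf routed through the YaCy proxy, then transcode the first page to jpg with ImageMagick. File names and the proxy address are assumptions for illustration; YaCy's own wrapper code may differ.

    import java.io.IOException;

    public class PdfSnapshotSketch {
        public static void main(String[] args) throws IOException, InterruptedException {
            String targetUrl = "http://yacy.net/en/";
            // fetch the page through the local YaCy proxy (Transparent Proxy + Always Fresh)
            Process pdf = new ProcessBuilder("wkhtmltopdf",
                    "--proxy", "http://localhost:8090", targetUrl, "snapshot.pdf")
                    .inheritIO().start();
            if (pdf.waitFor() == 0) {
                // transcode the first pdf page into a jpg preview with ImageMagick
                Process jpg = new ProcessBuilder("convert", "snapshot.pdf[0]", "snapshot.jpg")
                        .inheritIO().start();
                jpg.waitFor();
            }
        }
    }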
2014-12-01 15:03:09 +01:00
Michael Peter Christen
41d00350e4 moved network configuration to the Use Case submenu; this is necessary
because the definition of portal peers within the YaCy freeworld network
is otherwise split into two different main menus.
2014-12-01 01:12:51 +01:00
reger
221f86dd5e position api icon (ViewFile.html) 2014-11-30 01:58:14 +01:00
Michael Peter Christen
ad0da5f246 added new web page snapshot infrastructure which will lead to the
ability to have web page previews in the search results.
(This is a stub, no function available with this yet...)
2014-11-29 11:56:32 +01:00
reger
c475be2937 fix (enable) error msg on empty query 2014-11-28 22:44:33 +01:00
reger
f709132961 remove obsolete alternate link
fix api link
2014-11-28 01:40:46 +01:00
Michael Peter Christen
3c71e1c872 show vocabularies in search result (in case of debugging) 2014-11-28 01:19:31 +01:00
Michael Peter Christen
2fce2e2697 larger boost fields for ranking 2014-11-27 12:11:54 +01:00
Michael Peter Christen
6c03ff8355 bold words in snippets should not be coloured black in the base style
because there are styles with dark backgrounds which make the bold word
invisible
2014-11-27 08:08:05 +01:00
Michael Peter Christen
c0f9f6ac66 added option to change the navbar-default, i.e. usable for dark skins 2014-11-26 18:01:35 +01:00
Michael Peter Christen
84763126e0 added option to make the YaCy proxy act as if the cache is never stale. If
set to 'Always Fresh' the cache is always used whenever an entry for the
request exists in the cache. This is a good way to archive web content and
access it without going online again, as long as the documents exist.
To do so, open /Settings_p.html?page=ProxyAccess and check the "Always
Fresh" checkbox.
This is set to false by default, which behaves as before.
If you set this to true, then you have your web archive in DATA/HTCACHE.
Copy this to carry around your private copy of the internet!
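
A small sketch of fetching a page through the YaCy proxy from a client, so that with "Always Fresh" enabled the answer can come from DATA/HTCACHE even when offline; host and port are the defaults assumed throughout these commits, not YaCy code:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.InetSocketAddress;
    import java.net.Proxy;
    import java.net.URL;

    public class ProxyCacheFetch {
        public static void main(String[] args) throws Exception {
            // route the request through the local YaCy proxy
            Proxy yacyProxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("localhost", 8090));
            HttpURLConnection con =
                    (HttpURLConnection) new URL("http://yacy.net/en/").openConnection(yacyProxy);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) System.out.println(line);
            }
        }
    }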
2014-11-24 20:28:52 +01:00
Michael Peter Christen
5bb52f79be reduce number of calls to queue.size() because that may be a bottleneck
during crawling
2014-11-23 20:09:32 +01:00
Michael Peter Christen
092d97d7ac when importing vocabulary csv files, also accept files without a semicolon
and strip quotes from literals
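
A small sketch of such tolerant csv handling, as an assumption of how it could look rather than the actual importer code: a line without a semicolon yields a single field, and surrounding quotes are stripped from each literal.

    public class VocabularyCsvSketch {
        // split a csv line on ';' (a line without ';' yields one field) and strip quotes
        static String[] parseLine(String line) {
            String[] fields = line.split(";");
            for (int i = 0; i < fields.length; i++) {
                String f = fields[i].trim();
                if (f.length() >= 2 && f.startsWith("\"") && f.endsWith("\"")) {
                    f = f.substring(1, f.length() - 1);
                }
                fields[i] = f;
            }
            return fields;
        }
    }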
2014-11-21 12:42:29 +01:00
Michael Peter Christen
ee9ec40048 added hints to ranking to make ranking boosts using vocabularies easier 2014-11-20 18:46:06 +01:00
Michael Peter Christen
70f03f7c8e do not cache search requests to Solr if the result is used for
doublechecking. If a double-check comes from cached results the
doublecheck fails.
2014-11-20 18:45:27 +01:00
Michael Peter Christen
a0b84e4def use a LinkedHashMap for facets to maintain facet order as given by solr 2014-11-20 18:44:29 +01:00
Michael Peter Christen
0dc6e0a5f2 added option to enrich vocabularies with synonyms from synonym database 2014-11-19 18:12:43 +01:00
Michael Peter Christen
6a2a669db4 added loading of the synonyms file from addon/synonyms into the
knowledge loader
2014-11-19 17:36:56 +01:00
Michael Peter Christen
fdba8e2fa0 fix for 2-day network stats table: showing 48 instead of 24 hours from
peer history
2014-11-17 14:23:21 +01:00
Michael Peter Christen
ec9d021568 added option in the vocabulary editor to import CSV files with different
encodings (preselecting the windows-type character encoding which is typical
for CSV files). Also fixed other problems with character encoding in
dictionary files. Automatically generated vocabularies are now also
noted in the API steering.
2014-11-17 14:22:40 +01:00
reger
b558433211 adjust tag cloud font size calculation
to limit max font size to ~ TOPWORDS_MAXSIZE
2014-11-17 01:24:30 +01:00
Michael Peter Christen
0550b54d56 added fix to postprocessing: avoid caching of postprocessing collection
to always get fresh lists of documents. This is necessary since the
postprocessing changes the same documents which the
postprocessing-collection query selects.
2014-11-14 16:34:55 +01:00
Michael Peter Christen
68e8039fd1 added high-precision scheduler for API processes. This also allows making
execution dependent on available RAM or CPU load. The default value for
CPU load is 4.0 and the check runs once a minute.
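
A rough illustration of the kind of gate such a scheduler can apply before running a process; thresholds and method names here are illustrative assumptions, not the actual YaCy scheduler code:

    import java.lang.management.ManagementFactory;

    public class LoadGateSketch {
        // decide whether a scheduled API process may run right now
        static boolean mayExecute(double maxLoad, long minFreeBytes) {
            double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage(); // -1 if unavailable
            long free = Runtime.getRuntime().freeMemory();
            return (load < 0 || load <= maxLoad) && free >= minFreeBytes;
        }

        public static void main(String[] args) {
            // 4.0 matches the default CPU load threshold mentioned in the commit message
            System.out.println("may execute: " + mayExecute(4.0, 50L * 1024 * 1024));
        }
    }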
2014-11-14 10:02:50 +01:00
Michael Peter Christen
0a879c98e7 added new 'firstSeen' database table and the necessary data structures which
hold a date for each URL to record when a url was first seen. This is
then used to overwrite the modification date of urls upon recrawl in
case the first-seen date is before the latest document date. This
behaviour is necessary due to the common behaviour of content management
systems which always attach the current date to all documents. Using the
firstSeen database it is possible to approximate a real first document
creation date in case the crawler runs frequently for the same
domain. As a result, search results ordered by date have much better
quality, and the use of YaCy as a search agent for the latest news works
better.
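
An illustrative sketch of the date correction described above (not the actual YaCy table code; a plain map stands in for the firstSeen database):

    import java.util.Date;
    import java.util.HashMap;
    import java.util.Map;

    public class FirstSeenSketch {
        private final Map<String, Date> firstSeen = new HashMap<>(); // urlhash -> first-seen date

        Date correctedModificationDate(String urlhash, Date claimedModificationDate) {
            firstSeen.putIfAbsent(urlhash, new Date()); // record the date on first contact
            Date seen = firstSeen.get(urlhash);
            // if the url was first seen before the date the document now claims, that claim
            // is likely just the CMS stamping "now"; fall back to the first-seen date
            return seen.before(claimedModificationDate) ? seen : claimedModificationDate;
        }
    }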
2014-11-13 00:58:58 +01:00
Michael Peter Christen
487a733c99 fix for catchall handling in search 2014-11-12 22:48:33 +01:00
sixcooler
33b0234454 added an input field for setting 'fileHost'.
Set this to avoid error messages like 'proxy use not allowed / granted'
when accessing your peer by its hostname.
2014-11-12 21:32:34 +01:00
Michael Peter Christen
1db476c67e fix for bad table iteration 2014-11-10 18:52:01 +01:00
Michael Peter Christen
e05b7332b9 html fix 2014-11-10 02:18:44 +01:00
reger
c1ad265efd remove unused accordion javascript call for facet navs 2014-11-09 22:06:00 +01:00
Michael Peter Christen
ecdfb35f09 added long variables to debug output in index browser 2014-11-07 18:12:09 +01:00
Michael Peter Christen
95d87f00b3 fix for bad query generation in doublecheck in postprocessing 2014-11-07 18:11:23 +01:00
orbiter
a2b5cfb3cf added reverse button to tables, now on by default (to see latest entries
first)
2014-11-02 20:30:49 +01:00
orbiter
fceac5d2d4 added (missing) Tables_p.xml for table xml api 2014-11-02 20:10:32 +01:00
orbiter
dbafd4865e enhanced debug code in host browser 2014-10-30 15:47:44 +01:00