First update of the German language file + corresponding changes in the HTML file. Many more will follow...

git-svn-id: https://svn.berlios.de/svnroot/repos/yacy/trunk@349 6c8d7289-2bf4-0310-a012-ef5d649a1542
rramthun 2005-06-30 18:26:10 +00:00
parent bef3aaec38
commit 35c7a5883b
2 changed files with 13 additions and 8 deletions


@@ -32,7 +32,7 @@
<p>
<div class=small id="startCrawling"><b>Start Crawling Job:</b>&nbsp;
You can define url's as start points for Web page crawling and start that crawling here.
You can define URLs as start points for Web page crawling and start that crawling here.
</div>
<table border="0" cellpadding="5" cellspacing="0" width="100%">
<form action="IndexCreate_p.html" method="post" enctype="multipart/form-data">
@@ -87,7 +87,7 @@ You can define url's as start points for Web page crawling and start that crawling here.
</td>
</tr>
<tr valign="top" class="TableCellDark">
<td class=small>Do Remote Indexing</td>
<td class=small>Do Remote Indexing:</td>
<td class=small><input type="checkbox" name="crawlOrder" align="top" #(crawlOrderChecked)#::checked#(/crawlOrderChecked)#></td>
<td class=small colspan="3">
If checked, the crawl will try to assign the leaf nodes of the search tree to remote peers.


@@ -1,6 +1,6 @@
#English=German
<!-- lang -->default\(english\)=Deutsch
<!-- author -->=Übersetzer eintragen
<!-- author -->=Roland Ramthun
#-----------------------------------------------------------
#index.html
@@ -16,20 +16,25 @@ order by:=sortiert nach:
>Date-Quality=>Datum-Qualit&auml;t
Resource:=Quelle:
Max. search time \(seconds\)=Max. Suchzeit (Sekunden)
"URL mask":=URL-Maske
URL mask:=URL-Maske:
#-------------------------------------------------------
#IndexCreate_p.html
Index Creation=Index Erzeugung
You can define url's as start points for Web page crawling and start that crawling here.=Du kannst hier URLs angeben, die gecrawlt werden sollen und dann das Crawling starten.
You can define URLs as start points for Web page crawling and start that crawling here.=Du kannst hier URLs angeben, die gecrawlt werden sollen und dann das Crawling starten.
Crawling Depth:=Crawl-Tiefe:
Crawling Filter:=Crawl-Maske:
<fehlt>
Store to Proxy Cache:=In den Proxy-Cache speichern:
Do Local Indexing:=Lokales Indexieren:
Do Remote Indexing:=Indexieren auf anderen Peers:
Exclude static Stop-Words:=Stop-Words ausschließen:
Start Point:=Startpunkt:
#-------------------------------------------------------
#Status.html
#Das passt hier, aber wenn wo anders Protection steht eher nicht..,
Protection=Sicherheit
Welcome to YaCy!=Willkommen bei YaCy!
System version=Systemversion
Proxy host=Proxy Host
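
A note on the .lng format touched above: each non-comment line maps an English template phrase to its German replacement, with backslash-escaped parentheses in the key (as in "Max. search time \(seconds\)"). The sketch below is a minimal, hypothetical illustration of how such a table could be parsed and applied to page text. It is not YaCy's actual translator, and treating the escapes as plain characters rather than regex metacharacters is an assumption.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Minimal sketch (hypothetical, not YaCy's translator) of applying
    // .lng-style "English=German" lines to page text.
    public class LngSketch {

        // Parse translation lines: lines starting with '#' are comments,
        // and "\(" / "\)" in keys are read as literal parentheses here.
        static Map<String, String> parse(String[] lines) {
            Map<String, String> table = new LinkedHashMap<>();
            for (String line : lines) {
                int eq = line.indexOf('=');
                if (line.isEmpty() || line.startsWith("#") || eq < 0) continue;
                String key = line.substring(0, eq)
                                 .replace("\\(", "(").replace("\\)", ")");
                table.put(key, line.substring(eq + 1));
            }
            return table;
        }

        // Replace every English phrase in the page with its German value.
        static String translate(String page, Map<String, String> table) {
            for (Map.Entry<String, String> e : table.entrySet()) {
                page = page.replace(e.getKey(), e.getValue());
            }
            return page;
        }

        public static void main(String[] args) {
            String[] lng = {
                "#IndexCreate_p.html",
                "Max. search time \\(seconds\\)=Max. Suchzeit (Sekunden)",
                "Start Point:=Startpunkt:"
            };
            String page = "<td>Max. search time (seconds)</td><td>Start Point:</td>";
            // Prints: <td>Max. Suchzeit (Sekunden)</td><td>Startpunkt:</td>
            System.out.println(translate(page, parse(lng)));
        }
    }

Iteration order matters if one key is a prefix of another, which is why an order-preserving map is used in this sketch.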