yacy_search_server/locales/cn.lng

3656 lines
193 KiB
Plaintext
Raw Blame History

This file contains ambiguous Unicode characters

This file contains Unicode characters that might be confused with other characters. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.

# cn.lng
# English-->Chinese
# -----------------------
# This is a part of YaCy, a peer-to-peer based web search engine
#
# (C) by Michael Peter Christen; mc@anomic.de
# first published on http://www.anomic.de
# Frankfurt, Germany, 2005
#
#
# This file is maintained by lofyer<lofyer@gmail.com>
# This file is written by lofyer
# If you find any mistakes or untranslated strings in this file please don't hesitate to email them to the maintainer.
#File: ConfigLanguage_p.html
#---------------------------
# Only part 1.
# Contributors are in chronological order, not how much they did absolutely.
# Thank you for your help!
<!-- lang -->default\(english\)==Chinese
<!-- author -->==lofyer
<!-- maintainer -->==lofyer@gmail.com
#-----------------------------
#File: AccessTracker_p.html
#---------------------------
Access Tracker==访问跟踪
Server Access Overview==网站访问概况
This is a list of \#\[num\]\# requests to the local http server within the last hour.==最近一小时内有 #[num]# 个到本地的访问请求.
This is a list of requests to the local http server within the last hour.==此列表显示最近一小时内到本机的访问请求.
Showing \#\[num\]\# requests.==显示 #[num]# 个请求.
#>Host<==>Host<
>Path<==>路径<
Date<==日期<
Access Count During==访问时间
last Second==最近1 秒
last Minute==最近1 分
last 10 Minutes==最近10 分
last Hour==最近1 小时
The following hosts are registered as source for brute-force requests to protected pages==以下主机作为保护页面强制请求的源
#>Host==>Host
Access Times==访问时间
Server Access Details==服务器访问细节
Local Search Log==本地搜索日志
Local Search Host Tracker==本地搜索主机跟踪
Remote Search Log==远端搜索日志
#Total:==Total:
Success:==成功:
Remote Search Host Tracker==远端搜索跟踪
This is a list of searches that had been requested from this\' peer search interface==此列表显示从远端peer所进行的搜索
Showing \#\[num\]\# entries from a total of \#\[total\]\# requests.==显示 #[num]# 条目,共 #[total]# 个请求.
Requesting Host==请求主机
Offset==偏移量
Expected Results==期望结果
Returned Results==返回结果
Used Time \(ms\)==消耗时间(毫秒)
URL fetch \(ms\)==获取URL(毫秒)
Snippet comp \(ms\)==片段比较(毫秒)
Query==查询字符
#>User Agent<==>User Agent<
Search Word Hashes==搜索字哈希值
Count</td>==计数</td>
Queries Per Last Hour==小时平均查询
Access Dates==访问日期
This is a list of searches that had been requested from remote peer search interface==此列表显示从远端peer所进行的搜索.
#-----------------------------
#File: Blacklist_p.html
#---------------------------
Blacklist Administration==黑名单管理
Used Blacklist engine:==使用的黑名单引擎:
This function provides an URL filter to the proxy; any blacklisted URL is blocked==提供代理URL过滤;过滤掉自载入时加入进黑名单的URL.
from being loaded. You can define several blacklists and activate them separately.==您可以自定义黑名单并分别激活它们.
You may also provide your blacklist to other peers by sharing them; in return you may==您也可以提供你自己的黑名单列表给其他人;
collect blacklist entries from other peers.==同样,其他人也能将黑名单列表共享给您.
Active list:==激活列表:
No blacklist selected==未选中黑名单
Select list:==选中黑名单:
not shared::shared==未共享::已共享
"select"=="选择"
Create new list:==创建:
"create"=="创建"
Settings for this list==设置
"Save"=="保存"
Share/don't share this list==共享/不共享此名单
Delete this list==删除
Edit this list==编辑
These are the domain name/path patterns in==这些域名/路径规则来自
Blacklist Pattern==黑名单规则
Edit selected pattern\(s\)==编辑选中规则
Delete selected pattern\(s\)==删除选中规则
Move selected pattern\(s\) to==移动选中规则
#You can select them here for deletion==您可以从这里选择要删除的项
Add new pattern:==添加新规则:
"Add URL pattern"=="添加URL规则"
The right \'\*\', after the \'\/\', can be replaced by a <a href=\"http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html\">regex</a>.== 在 '/' 后边的 '*' ,可用<a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html">正则表达式</a>表示.
domain.net\/fullpath<==domain.net/绝对路径<
>domain.net\/\*<==>domain.net/*<
\*.domain.net\/\*<==*.domain.net/*<
\*.sub.domain.net\/\*<==*.sub.domain.net/*<
#sub.domain.\*\/\*<==sub.domain.*/*<
#domain.\*\/\*<==domain.*/*<
a complete <a href=\"http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html\">regex</a> \(slow\)==一个完整的<a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html">正则表达式</a> (慢)
#was removed from blacklist==wurde aus Blacklist entfernt
#was added to the blacklist==wurde zur Blacklist hinzugefügt
Activate this list for==为以下条目激活此名单
Show entries:==显示条目:
Entries per page:==页面条目:
"Go"=="Go"
Edit existing pattern\(s\):==编辑现有规则:
"Save URL pattern\(s\)"=="保存URL规则"
#-----------------------------
#File: BlacklistCleaner_p.html
#---------------------------
Blacklist Cleaner==黑名单整理
Here you can remove or edit illegal or double blacklist-entries.==在这里您可以删除或者编辑一个非法或者重复的黑名单条目.
Check list==校验名单
"Check"=="校验"
Allow regular expressions in host part of blacklist entries.==允许黑名单中主机部分的正则表达式.
The blacklist-cleaner only works for the following blacklist-engines up to now:==此整理目前只对以下黑名单引擎有效:
Illegal Entries in \#\[blList\]\# for==非法条目在 #[blList]#
Deleted \#\[delCount\]\# entries==已删除 #[delCount]# 个条目
Altered \#\[alterCount\]\# entries!==已修改 #[alterCount]# 个条目
Two wildcards in host-part==主机部分中的两个通配符
Either subdomain <u>or</u> wildcard==子域名<u>或者</u>通配符
Path is invalid Regex==无效正则表达式
Wildcard not on begin or end==通配符未在开头或者结尾处
Host contains illegal chars==主机名包含非法字符
Double==重复
"Change Selected"=="改变选中"
"Delete Selected"=="删除选中"
No Blacklist selected==未选中黑名单
#-----------------------------
#File: BlacklistImpExp_p.html
#---------------------------
#Blacklist Import==Blacklist Import
Used Blacklist engine:==使用的黑名单引擎:
Import blacklist items from...==导入黑名单条目从...
other YaCy peers:==其他的YaCy Peers:
"Load new blacklist items"=="载入黑名单条目"
#URL:==URL:
plain text file:<==文本文件:<
XML file:==XML文件:
Upload a regular text file which contains one blacklist entry per line.==上传一个每行都有一个黑名单条目的文本文件.
Upload an XML file which contains one or more blacklists.==上传一个包含一个或多个黑名单的XML文件.
Export blacklist items to...==导出黑名单到...
Here you can export a blacklist as an XML file. This file will contain additional==您可以导出黑名单到一个XML文件中此文件含有
information about which cases a blacklist is activated for.==激活黑名单所具备条件的详细信息.
"Export list as XML"=="导出名单到XML"
Here you can export a blacklist as a regular text file with one blacklist entry per line.==您可以导出黑名单到一个文本文件中,且每行都仅有一个黑名单条目.
This file will not contain any additional information==此文件不会包含详细信息
"Export list as text"=="导出名单到文本"
#-----------------------------
#File: BlacklistTest_p.html
#---------------------------
Blacklist Test==黑名单测试
Used Blacklist engine:==使用的黑名单引擎:
Test list:==测试黑名单:
"Test"=="测试"
The tested URL was==此链接
It is blocked for the following cases:==由于以下原因,此名单无效:
#Crawling==Crawling
#DHT==DHT
News==新闻
Proxy==代理
Search==搜索
Surftips==建议
#-----------------------------
#File: Blog.html
#---------------------------
by==by
Comments</a>==评论</a>
>edit==>编辑
>delete==>删除
Edit<==编辑<
previous entries==前一个条目
next entries==下一个条目
new entry==新条目
import XML-File==导入XML文件
export as XML==导出到XML文件
Comments</a>==评论</a>
Blog-Home==博客主页
Author:==作者:
Subject:==标题:
#Text:==Text:
You can use==您可以用
Yacy-Wiki Code==YaCy-Wiki 代码
here.==.
Comments:==评论:
deactivated==无效
>activated==>有效
moderated==改变
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
>Preview==>预览
No changes have been submitted so far!==未作出任何改变!
Access denied==拒绝访问
To edit or create blog-entries you need to be logged in as Admin or User who has Blog rights.==如果编辑或者创建博客内容,您需要登录.
Are you sure==确定
that you want to delete==要删除:
Confirm deletion==确定删除
Yes, delete it.==是, 删除.
No, leave it.==不, 保留.
Import was successful!==导入成功!
Import failed, maybe the supplied file was no valid blog-backup?==导入失败, 可能提供的文件不是有效的博客备份?
Please select the XML-file you want to import:==请选择您想导入的XML文件:
#-----------------------------
#File: BlogComments.html
#---------------------------
by==by
Comments</a>==评论</a>
Login==登录
Blog-Home==博客主页
delete</a>==删除</a>
allow</a>==允许</a>
Author:==作者:
Subject:==标题:
#Text:==Text:
You can use==您可以用
Yacy-Wiki Code==YaCy-Wiki 代码
here.==在这里.
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
#-----------------------------
#File: Bookmarks.html
#---------------------------
YaCy \'\#\[clientname\]\#\': Bookmarks==YaCy '#[clientname]#': 书签
The bookmarks list can also be retrieved as RSS feed. This can also be done when you select a specific tag.==书签列表也能用作RSS订阅.当您选择某个标签时您也可执行这个操作.
Click the API icon to load the RSS from the current selection.==点击API图标以从当前选择书签中载入RSS.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==获取所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
<h3>Bookmarks==<h3>书签
Bookmarks \(==书签\(
#Login==登录
List Bookmarks==显示书签
Add Bookmark==添加书签
Import Bookmarks==导入书签
Import XML Bookmarks==导入XML书签
Import HTML Bookmarks==导入HTML书签
"import"=="导入"
Default Tags:==默认标签
imported==已导入
#Edit Bookmark==编辑书签
#URL:==URL:
Title:==标题:
Description:==描述:
Folder \(/folder/subfolder\):==目录(/目录/子目录):
Tags \(comma separated\):==标签(以逗号隔开):
>Public:==>公共的:
yes==是
no==否
Bookmark is a newsfeed==书签是新闻订阅点
"create"=="创建"
"edit"=="编辑"
File:==文件:
import as Public==导入为公有
"private bookmark"=="私有书签"
"public bookmark"=="公共书签"
Tagged with==关键词:
'Confirm deletion'=='确认删除'
Edit==编辑
Delete==删除
Folders==目录
Bookmark Folder==书签目录
#Tags==标签
Bookmark List==书签列表
previous page==前一页
next page==后一页
All==所有
Show==显示
Bookmarks per page.==每页书签.
#unsorted==默认排序
#-----------------------------
#File: Collage.html
#---------------------------
Image Collage==图像拼贴
Private Queue==私有
Public Queue==公共
#-----------------------------
#File: compare_yacy.html
#---------------------------
Websearch Comparison==网页搜索对比
Left Search Engine==左侧引擎
Right Search Engine==右侧引擎
Query==查询
"Compare"=="比较"
Search Result==结果
#-----------------------------
#File: ConfigAccounts_p.html
#---------------------------
User Accounts==用户账户
User Administration==用户管理
User created:==用户已创建:
User changed:==用户已改变:
Generic error.==一般错误.
Passwords do not match.==密码不匹配.
Username too short. Username must be \>\= 4 Characters.==用户名太短, 至少为4个字符.
No password is set for the administration account.==管理员账户未设置密码.
Please define a password for the admin account.==请设置一个管理员密码.
Admin Account==管理员
Access from localhost without account==本地匿名访问
Access to your peer from your own computer \(localhost access\) is granted. No need to configure an administration account.==您可以从自己的电脑访问(本地访问). 不需要设置管理员账户.
Access only with qualified account==只允许授权用户访问
You need this only if you want a remote access to your peer.==如果您需要能从远端访问到本地peer的授权, 您可以设置此项.
Peer User:==Peer用户:
New Peer Password:==新Peer密码:
Repeat Peer Password:==重复Peer密码:
"Define Administrator"=="设置管理员账户"
Select user==选择用户
New user==新用户
Edit User==编辑用户
Delete User==删除用户
Edit current user:==编辑当前用户:
Username</label>==用户名</label>
Password</label>==密码</label>
Repeat password==重复密码
First name==名
Last name==姓
Address==地址
Rights==权限
Timelimit==时限
Time used==已用时
Save User==保存用户
#-----------------------------
#File: ConfigAppearance_p.html
#---------------------------
Appearance and Integration==外观界面
You can change the appearance of the YaCy interface with skins.==您可以在这里修改YaCy的外观界面.
#You can change the appearance of YaCy with skins==Sie können hier das Erscheinungsbild von YaCy mit Skins ändern
The selected skin and language also affects the appearance of the search page.==选择的皮肤和语言也会影响到搜索页面的外观.
If you <a href="ConfigPortal.html">create a search portal with YaCy</a> then you can==如果您<a href="ConfigPortal.html">创建YaCy门户</a>,
change the appearance of the search page here.==那么您能在<a href="ConfigPortal.html">这里</a> 改变搜索页面的外观.
#and the default icons and links on the search page can be replaced with you own.==und die standard Grafiken und Links auf der Suchseite durch Ihre eigenen ersetzen.
Skin Selection==选择皮肤
Select one of the default skins, download new skins, or create your own skin.==选择一个默认皮肤, 下载新皮肤或者创建属于您自己的皮肤.
Current skin==当前皮肤
Available Skins==可用皮肤
"Use"=="使用"
"Delete"=="删除"
>Skin Color Definition<==>改变皮肤颜色<
The generic skin \'generic_pd\' can be configured here with custom colors:==能在这里修改皮肤'generic_pd'的颜色:
>Background<==>背景<
>Text<==>文本<
>Legend<==>说明<
>Table&nbsp;Header<==>标签&nbsp;头部<
>Table&nbsp;Item<==>标签&nbsp;条目&nbsp;1<
>Table&nbsp;Item&nbsp;2<==>标签&nbsp;条目&nbsp;2<
>Table&nbsp;Bottom<==>标签&nbsp;底部<
>Border&nbsp;Line<==>边界&nbsp;线<
>Sign&nbsp;\'bad\'<==>符号&nbsp;'坏'<
>Sign&nbsp;\'good\'<==>符号&nbsp;'好'<
>Sign&nbsp;\'other\'<==>符号&nbsp;'其他'<
>Search&nbsp;Headline<==>搜索页面&nbsp;标题<
>Search&nbsp;URL==>搜索页面&nbsp;链接<
"Set Colors"=="设置颜色"
#>Skin Download<==>下载皮肤<
Skins can be installed from download locations==安装下载皮肤
Install new skin from URL==从URL安装皮肤
Use this skin==使用这个皮肤
"Install"=="安装"
Make sure that you only download data from trustworthy sources. The new Skin file==确保您的皮肤文件是从可靠源获得. 如果存在相同文件
might overwrite existing data if a file of the same name exists already.==, 新皮肤会覆盖旧的.
>Unable to get URL:==>无法打开链接:
Error saving the skin.==保存皮肤时出错.
#-----------------------------
#File: ConfigBasic.html
#---------------------------
Access Configuration==访问设置
Basic Configuration==基本设置
Your YaCy Peer needs some basic information to operate properly==您的YaCy Peer需要一些基本信息才能工作
Select a language for the interface==选择界面语言
Use Case: what do you want to do with YaCy:==用途: 您用YaCy做什么:
Community-based web search==基于社区的网络搜索
Join and support the global network \'freeworld\', search the web with an uncensored user-owned search network==加入并支持全球网络 'freeworld', 自由地搜索.
Search portal for your own web pages==属于您自己的搜索引擎
Your YaCy installation behaves independently from other peers and you define your own web index by starting your own web crawl. This can be used to search your own web pages or to define a topic-oriented search portal.==本机YaCy的peer创建与索引过程独立于其他peer, 即您可以定义自己的搜索偏向.
Files may also be shared with the YaCy server, assign a path here:==您也能与YaCy服务器共享内容, 在这里指定路径:
This path can be accessed at ==可以通过以下链接访问
Use that path as crawl start point.==将此路径作为索引起点.
Intranet Indexing==局域网索引
Create a search portal for your intranet or web pages or your \(shared\) file system.==创建您自己的局域网, 网页或者您共享的文件系统.
URLs may be used with http/https/ftp and a local domain name or IP, or with an URL of the form==适合http/https/ftp协议的链接/主机名/IP
or smb:==或者服务器信息块(SMB):
Your peer name has not been customized; please set your own peer name==您的peer尚未命名, 请命名它
You may change your peer name==您可以改变您的peer名称
Peer Name:==Peer名称:
Your peer cannot be reached from outside==外部将不能访问您的peer
which is not fatal, but would be good for the YaCy network==此举有利于YaCy网络
please open your firewall for this port and/or set a virtual server option in your router to allow connections on this port==请改变您的防火墙或者虚拟机路由设置, 从而外网能访问这个端口
Your peer can be reached by other peers==外部将能访问您的peer
Peer Port:==Peer端口:
Configure your router for YaCy:==设置本机路由:
Configuration was not successful. This may take a moment.==配置失败. 这需要花费一些时间.
Set Configuration==保存设置
What you should do next:==下一步您该做的:
Your basic configuration is complete! You can now \(for example\)==配置成功, 您现在可以
just <==开始<
start an uncensored search==自由地搜索了
start your own crawl</a> and contribute to the global index, or create your own private web index==开始您的索引, 并将其贡献给全球索引, 或者创建一个您自己的私有搜索网页
set a personal peer profile</a> \(optional settings\)==设置私有peer</a> (可选项)
monitor at the network page</a> what the other peers are doing==监视网络页面</a>, 以及其他peer的活动
Your Peer name is a default name; please set an individual peer name.==您的peer名称为系统默认, 请设置另外一个名称.
You did not set a user name and/or a password.==您未设置用户名和/或密码.
Some pages are protected by passwords.==一些页面受密码保护.
You should set a password at the <a href="ConfigAccounts_p.html">Accounts Menu</a> to secure your YaCy peer.</p>::==您可以在 <a href="ConfigAccounts_p.html">账户菜单</a> 设置密码, 从而加强您的YaCy peer安全性.</p>::
You did not open a port in your firewall or your router does not forward the server port to your peer.==您未打开防火墙端口或者您的路由器未能与主机的服务端口建立链接.
This is needed if you want to fully participate in the YaCy network.==如果您要完全加入YaCy网络, 此项是必须的.
You can also use your peer without opening it, but this is not recomended.==不开放您的peer您也能使用, 但是不推荐.
#-----------------------------
#File: ConfigHeuristics_p.html
#---------------------------
Heuristics Configuration==启发式配置
A <a href=\"http://en.wikipedia.org/wiki/Heuristic\">heuristic</a> is an \'experience-based technique that help in problem solving, learning and discovery\' \(wikipedia\).==<a href="http://de.wikipedia.org/wiki/Heuristik">启发式</a> '是一个依赖于经验来解决问题, 学习与发现问题的过程.' (Wikipedia).
The search heuristics that can be switched on here are techniques that help the discovery of possible search results based on link guessing, in-search crawling and requests to other search engines.==
您可以在这里开启启发式搜索, 通过猜测链接, 嵌套搜索和访问其他搜索引擎, 从而找到更多符合您期望的结果.
When a search heuristic is used, the resulting links are not used directly as search result but the loaded pages are indexed and stored like other content.==开启启发式搜索时, 搜索结果给出的链接并不是直接搜索的链接, 而是已经缓存在其他服务器上的结果.
This ensures that blacklists can be used and that the searched word actually appears on the page that was discovered by the heuristic.==这保证了黑名单的有效性, 并且搜索关键字是通过启发式搜索找到的.
The success of heuristics are marked with an image==启发式搜索找到的结果会被特定图标标记
heuristic:&lt;name&gt;==启发式:&lt;名称&gt;
#\(redundant\)==(redundant)
\(new link\)==(新链接)
below the favicon left from the search result entry:==搜索结果中使用的图标:
The search result was discovered by a heuristic, but the link was already known by YaCy==搜索结果通过启发式搜索, 且链接已知
The search result was discovered by a heuristic, not previously known by YaCy==搜索结果通过启发式搜索, 且链接未知
\'site\'-operator: instant shallow crawl=='站点'-操作符: 即时浅抓取
When a search is made using a \'site\'-operator \(like: \'download site:yacy.net\'\) then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'站点'-操作符搜索时(比如: 'download site:yacy.net') ,主机就会立即抓取层数为 最大限制深度-1 的内容.
That means: right after the search request the portal page of the host is loaded and every page that is linked on this page that points to a page on the same host.==意即: 在链接请求发出后, 搜索引擎就会载入在同一主机中每一个与此页面相连的网页.
Because this \'instant crawl\' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search \(after a small pause of some seconds\).==因为'立即抓取'依赖于robots.txt和两个相连页面的最小访问时间, 所以这个启发式选项会相当慢, 但是在第二次搜索时会搜索到更多条目(需要间隔几秒钟).
scroogle: load external search result list==scroogle: 载入外部搜索引擎结果
When using this heuristic, then every search request line is used for a call to scroogle.==开启这个选项时, 每一次搜索都会引入scroogle的结果.
20 results are taken from scroogle and loaded simultanously, parsed and indexed immediately.==同时读取并索引从scroogle获得的20个结果.
#-----------------------------
#File: ConfigHTCache_p.html
#---------------------------
Hypertext Cache Configuration==HTCache配置
The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==HTCache存储着从HTTP和FTP协议获得的内容. 其中从smb:// 和 file:// 取得的内容不会被缓存.
The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是队列式的: 队列满时, 会删除旧内容, 从而加入新内容.
HTCache Configuration==HTCache配置
The path where the cache is stored==缓存存储路径
The current size of the cache==当前缓存大小
The maximum size of the cache==缓存最大尺寸
"Set"=="设置"
Cleanup==清除
Cache Deletion==删除缓存
Delete HTTP &amp; FTP Cache==删除HTTP &amp; FTP 缓存
Delete robots.txt Cache==删除robots.txt 缓存
"Delete"=="删除"
#-----------------------------
#File: ConfigLanguage_p.html
#---------------------------
Language selection==语言选择
You can change the language of the YaCy-webinterface with translation files.==您可以使用翻译文件来改变YaCy操作界面的语言.
Current language</label>==当前语言</label>
#default\(english\)==Deutsch
Author\(s\) \(chronological\)</label>==作者(按时间排序)</label>
Send additions to maintainer</em>==向维护者提交补丁</em>
Available Languages</label>==可用语言</label>
Install new language from URL==从URL安装新语言
Use this language==使用此语言
"Use"=="使用"
"Delete"=="删除"
"Install"=="安装"
Unable to get URL:==打开链接失败:
Error saving the language file.==保存语言文件时发生错误.
Make sure that you only download data from trustworthy sources. The new language file==确保您的数据是从可靠源下载. 如果存在相同文件名
might overwrite existing data if a file of the same name exists already.==, 旧文件将被覆盖.
#-----------------------------
#File: ConfigLiveSearch.html
#---------------------------
Integration of a Search Field for Live Search==搜索栏集成: 即时搜索
A \'Live-Search\' input field that reacts as search-as-you-type in a pop-up window can easily be integrated in any web page=='即时搜索'输入栏: 即当您在搜索栏键入关键字时, 会在网页中弹出搜索对话框按钮
This is the same function as can be seen on all pages of the YaCy online-interface \(look at the window in the upper right corner\)==当您在线使用YaCy时, 您会在搜索页面看到相应功能(页面右上角)
Just use the code snippet below to integrate that in your own web pages==将以下代码添加到您的网页中
Please check if the address, as given in the example \'\#\[ip\]\#\:\#\[port\]\#\' here is correct and replace it with more appropriate values if necessary==对于形如 '#[ip]#:#[port]#' 的地址, 请用具体值来替换
Code Snippet:==代码:
YaCy Portal Search==YaCy门户搜索
"Search"=="搜索"
Configuration options and defaults for \'yconf\':==配置设置和默认的'yconf':
Defaults<==默认<
url<==URL<
is a mandatory property - no default<==固有参数 - 非默认<
YaCy P2P Web Search==YaCy P2P 网页搜索
Size and position \(width \| height \| position\)==尺寸和位置(宽度 | 高度 | 位置)
Specifies where the dialog should be displayed. Possible values for position: \'center\', \'left\', \'right\', \'top\', \'bottom\', or an array containing a coordinate pair \(in pixel offset from top left of viewport\) or the possible string values \(e.g. \[\'right\',\'top\'\] for top right corner\)==指定对话框位置. 对于位置: 'center', 'left', 'right', 'top', 'bottom' 的值, 或者一个包含对应位置值的数组 (以左上角为参考位置的像素数), 或者字符串值 (e.g. ['right','top'] 对应右上角)
Animation effects \(show | hide\)==动画效果 (显示 | 隐藏)
The effect to be used. Possible values: \'blind\', \'clip\', \'drop\', \'explode\', \'fold\', \'puff\', \'slide\', \'scale\', \'size\', \'pulsate\'.==
可用特效: 'blind', 'clip', 'drop', 'explode', 'fold', 'puff', 'slide', 'scale', 'size', 'pulsate'.
Interaction \(modal \| resizable\)==对话框 (modal | 可变)
If modal is set to true, the dialog will have modal behavior; other items on the page will be disabled \(i.e. cannot be interacted with\).==如果选中modal属性, 则对话框会有modal行为; 否则页面上就不具有此特性. (即不能进行交互操作).
Modal dialogs create an overlay below the dialog but above other page elements.==Modal对话框会在页面元素下面而不是其上创建覆盖层.
If resizable is set to true, the dialog will be resizeable.==如果选中可变属性, 对话框大小就是可变的.
Load JavaScript load_js==载入页面JavaScript
If load_js is set to false, you have to manually load the needed JavaScript on your portal page.==如果未选中载入页面JavaScript, 那么您可能需要手动加载页面里的JavaScript.
This can help to avoid timing problems or double loading.==这有助于避免分时或者重载问题.
Load Stylesheets load_css==载入页面样式
If load_css is set to false, you have to manually load the needed CSS on your portal page.==如果未选中载入页面样式, 您需要手动加载页面里的CSS文件.
#Themes==Themes
You can <==您能够<
download</a> ready made themes or <a href=\"http://jqueryui.com/themeroller/\" target=\"_blank\">create</a>==下载</a>或者<a href="http://jqueryui.com/themeroller/" target="_blank">创建</a>
your own custom theme. <br/>Themes are installed into: DATA/HTDOCS/yacy/ui/css/themes/==一个您自己的主题. <br/>主题文件安装在: DATA/HTDOCS/yacy/ui/css/themes/
#-----------------------------
#File: ConfigNetwork_p.html
#---------------------------
Network Configuration==网络设置
No changes were made!==未作出任何改变!
Accepted Changes==应用设置
Inapplicable Setting Combination==设置未被应用
#P2P operation can run without remote indexing, but runs better with remote indexing switched on. Please switch 'Accept Remote Crawl Requests' on==P2P-Tätigkeit läuft ohne Remote-Indexierung, aber funktioniert besser, wenn diese eingeschaltet ist. Bitte aktivieren Sie 'Remote Crawling akzeptieren'
For P2P operation, at least DHT distribution or DHT receive \(or both\) must be set. You have thus defined a Robinson configuration==对于P2P操作, 需要配置DHT分布网络或者DHT设备(或都要配置). 因此您需要定义一个Robinson配置.
Global Search in P2P configuration is only allowed, if index receive is switched on. You have a P2P configuration, but are not allowed to search other peers.==仅当接收索引选项打开时, 才能进行P2P全球搜索.
For Robinson Mode, index distribution and receive is switched off==在Robinson模式中, 索引分发和接收是默认关闭的.
#This Robinson Mode switches remote indexing on, but limits targets to peers within the same cluster. Remote indexing requests from peers within the same cluster are accepted==Dieser Robinson-Modus aktiviert Remote-Indexierung, aber beschränkt die Anfragen auf Peers des selben Clusters. Nur Remote-Indexierungsanfragen von Peers des selben Clusters werden akzeptiert
#This Robinson Mode does not allow any remote indexing \(neither requests remote indexing, nor accepts it\)==Dieser Robinson-Modus erlaubt keinerlei Remote-Indexierung (es wird weder Remote-Indexierung angefragt, noch akzeptiert)
Network and Domain Specification==指定网络和域.
# With this configuration it is not allowed to authentify automatically from localhost!==Diese Konfiguration erlaubt keine automatische Authentifikation von localhost!
# Please open the <a href=\"ConfigAccounts_p.html\">Account Configuration</a> and set a new password.==Bitte in der <a href="ConfigAccounts_p.html">Benutzerverwaltung</a> ein neues Passwort festlegen.
YaCy can operate a computing grid of YaCy peers or as a stand-alone node.==您可以操作由YaCy peer组成的计算网格或者一个单独节点.
To control that all participants within a web indexing domain have access to the same domain,==进行索引的域需要具有访问权限才能控制相同域内的所有成员,
this network definition must be equal to all members of the same YaCy network.==且此设置对同一YaCy网络内的所有成员有效.
Network Definition==网络定义
Network Nick==网络别名
Long Description==描述
Indexing Domain==索引域
#DHT==DHT
"Change Network"=="改变网络"
Distributed Computing Network for Domain==域内分布式计算网络.
You can configure if you want to participate at the global YaCy network or if you want to have your==如果要加入YaCy全球网络或者仅仅
own separate search cluster with or without connection to the global network. You may also define==作为一个独立的搜索cluster, 请配置此项.
a completely independent search engine instance, without any data exchange between your peer and other==您也可以不与其他peer有任何数据交换, 作一个完全独立的搜索引擎.
peers, which we call a 'Robinson' peer.==对于这种配置的peer, 即叫做Robinson peer.
Peer-to-Peer Mode==点对点模式
>Index Distribution==>索引分发
This enables automated, DHT-ruled Index Transmission to other peers==自动向其他peer传递DHT规则的索引
>enabled==>开启
disabled during crawling==在crawl时关闭
disabled during indexing==在索引时关闭
>Index Receive==>接收索引
Accept remote Index Transmissions==接受远程索引传递
This works only if you have a senior peer. The DHT-rules do not work without this function==仅当您拥有更上级peer时有效. 如果未设置此项, DHT规则不生效
>reject==>拒绝
accept transmitted URLs that match your blacklist==接受符合黑名单的URL
#>Accept Remote Crawl Requests==>Remotecrawl-Anfragen akzeptieren
#Perform web indexing upon request of another peer==Führe Indexierung bei Anfrage eines anderen Peers aus
#This works only if you are a senior peer==Dies funktioniert nur, wenn Sie ein Senior-Peer sind
#Load with a maximum of==Lade mit maximal
#pages per minute==Seiten pro Minute (PPM)
>Robinson Mode==>Robinson模式
If your peer runs in 'Robinson Mode' you run YaCy as a search engine for your own search portal without data exchange to other peers==如果您的peer运行在'Robinson模式', 您能在不与其他peer交换数据的情况下进行搜索
There is no index receive and no index distribution between your peer and any other peer==您不会与其他peer进行索引传递
In case of Robinson-clustering there can be acceptance of remote crawl requests from peers of that cluster==对于Robinson模式的cluster 一样会应答远端的crawl请求
>Private Peer==>私有Peer
Your search engine will not contact any other peer, and will reject every request==您的搜索引擎不会与其他peer联系, 并会拒绝每一个外部请求
#>Private Cluster==>Privater Cluster
#Your peer is part of a private cluster without public visibility
#Index data is not distributed, but remote crawl requests are distributed and accepted from your cluster
#Search requests are spread over all peers of the cluster, and answered from all peers of the cluster
#List of ip:port - addresses of the cluster: \(comma-separated\)
>Public Cluster==>公共Cluster
Your peer is part of a public cluster within the YaCy network==您的peer属于YaCy网络内的一个公共cluster
Index data is not distributed, but remote crawl requests are distributed and accepted==索引数据不会被分发, 但是外部的crawl请求会被分发和接受
Search requests are spread over all peers of the cluster, and answered from all peers of the cluster==搜索请求在当前cluster内的所有peer中传播, 并且这些peer同样会作出回应
List of .yacy or .yacyh - domains of the cluster: \(comma-separated\)==Cluster内 .yacy 或者 .yacyh 的域名列表 : (以逗号隔开)
>Public Peer==>公共Peer
You are visible to other peers and contact them to distribute your presence==对于其他peer您是可见的, 可以与他们进行通信以分发你的索引
Your peer does not accept any outside index data, but responds on all remote search requests==您的peer不接受任何外部索引数据, 但是会回应所有外部搜索请求
#>Peer Tags==>Peer Tags
When you allow access from the YaCy network, your data is recognized using keywords==当您允许YaCy网络的访问时, 您的数据会以关键字形式表示
Please describe your search portal with some keywords \(comma-separated\)==请用关键字描述您的搜索门户 (以逗号隔开)
If you leave the field empty, no peer asks your peer. If you fill in a \'\*\', your peer is always asked.==如果此部分留空, 那么您的peer不会被其他peer访问. 如果内容是 '*' 则标示您的peer永远被允许访问.
"Save"=="保存"
#-----------------------------
#File: ConfigParser.html
#---------------------------
Parser Configuration==解析配置
Content Parser Settings==内容解析设置
With this settings you can activate or deactivate parsing of additional content-types based on their MIME-types.==此设置能开启/关闭依据文件类型(MIME)的内容解析.
For a detailed description of the various MIME-types take a look at==关于MIME的详细描述请参考
http://www.iana.org/assignments/media-types/</a>==http://www.iana.org/assignments/media-types/</a>.
enable/disable Parser==Parser 开启 / 关闭
# --- Parser Names are hard-coded BEGIN ---
##Mime-Type==MIME Typ
##Microsoft Powerpoint Parser==Microsoft Powerpoint Parser
#Torrent Metadata Parser==Torrent Metadaten Parser
##HTML Parser==HTML Parser
#GNU Zip Compressed Archive Parser==GNU Zip Komprimiertes Archiv Parser
##Adobe Flash Parser==Adobe Flash Parser
#Word Document Parser==Word Dokument Parser
##vCard Parser==vCard Parser
#Bzip 2 UNIX Compressed File Parser==bzip2 UNIX Komprimierte Datei Parser
#OASIS OpenDocument V2 Text Document Parser==OASIS OpenDocument V2 Text Dokument Parser
##Microsoft Excel Parser==Microsoft Excel Parser
#ZIP File Parser==ZIP Datei Parser
##Rich Site Summary/Atom Feed Parser==Rich Site Summary / Atom Feed Parser
#Comma Separated Value Parser==Comma Separated Value (CSV) Parser
##Microsoft Visio Parser==Microsoft Visio Parser
#Tape Archive File Parser==Bandlaufwerk Archiv Datei Parser
#7zip Archive Parser==7zip Archiv Parser
##Acrobat Portable Document Parser==Adobe Acrobat Portables Dokument Format Parser
##Rich Text Format Parser==Rich Text Format Parser
#Generic Image Parser==Generischer Bild Parser
#PostScript Document Parser==PostScript Dokument Parser
#Open Office XML Document Parser==Open Office XML Dokument Parser
#BMP Image Parser==BMP Bild Parser
# --- Parser Names are hard-coded END ---
"Submit"=="提交"
#-----------------------------
#File: ConfigPortal.html
#---------------------------
Integration of a Search Portal==搜索门户设置
If you like to integrate YaCy as portal for your web pages, you may want to change icons and messages on the search page.==如果您想将YaCy作为您的网站搜索门户, 您可能需要在这改变搜索页面的图标和信息.
The search page may be customized.==搜索页面可以自由定制.
You can change the \'corporate identity\'-images, the greeting line==您可以改变 'Corporate Identity' 图像, 问候语
and a link to a home page that is reached when the \'corporate identity\'-images are clicked.==和一个指向首页的 'Corporate Identity' 图像链接.
To change also colours and styles use the <a href=\"ConfigAppearance_p.html\">Appearance Servlet</a> for different skins and languages.==
若要改变颜色和风格,请到<a href="ConfigAppearance_p.html">外观选项</a>选择您喜欢的皮肤和语言.
Greeting Line<==问候语<
URL of Home Page<==首页链接<
URL of a Small Corporate Image<==小图位置<
URL of a Large Corporate Image<==大图位置<
Show Navigation Bar on Search Page?==显示导航栏和搜索页?
Show Navigation Top-Menu&nbsp;==显示顶级导航菜单&nbsp;
no link to YaCy Menu \(admin must navigate to /Status.html manually\)==没有到YaCy菜单的链接(管理页面必须指向/Status.html)
Show Advanced Search Options on Search Page?==在搜索页显示高级搜索选项?
Show Advanced Search Options on index.html&nbsp;==在index.html显示高级搜索选项?
do not show Advanced Search==不显示高级搜索
Show Information Links for each Search Result Entry==显示搜索结果的链接信息
>Date&==>日期&
>Size&==>大小&
>Metadata&==>元数据&
>Parser&==>Parser&
>Pictures==>图像
Default Pop-Up Page<==默认弹出页面<
>Status Page==>状态页面
>Search Front Page==>搜索首页
>Search Page \(small header\)==>搜索页面(二级标题)
>Interactive Search Page==>交互搜索页面
Default index.html Page \(by forwarder\)==默认index.html(前者指定)
Target for Click on Search Results==点击搜索结果时
\"_blank\" \(new window\)=="_blank" (新窗口)
\"_self\" \(same window\)=="_self" (同一窗口)
\"_parent\" \(the parent frame of a frameset\)=="_parent" (父级窗口)
\"_top\" \(top of all frames\)=="_top" (置顶)
\"searchresult\" \(a default custom page name for search results\)=="搜索结果" (搜索结果页面名称)
"Change Search Page"=="改变搜索页"
"Set to Default Values"=="设为默认值"
The search page can be integrated in your own web pages with an iframe. Simply use the following code:==使用以下代码, 将搜索页能集成在网页框架中:
This would look like:==示例:
For a search page with a small header, use this code:==对于一个拥有二级标题的页面, 可使用以下代码:
A third option is the interactive search. Use this code:==交互搜索代码:
#-----------------------------
#File: ConfigProfile_p.html
#---------------------------
Your Personal Profile==您的个人资料
You can create a personal profile here, which can be seen by other YaCy-members==您可以在这创建个人资料, 而且对其他YaCy成员可见
or <a href="ViewProfile.html\?hash=localhash">in the public</a> using a <a href="ViewProfile.rdf\?hash=localhash">FOAF RDF file</a>.==或者<a href="ViewProfile.html?hash=localhash">在公共场所时</a>使用<a href="ViewProfile.rdf?hash=localhash">FOAF RDF 文件</a>.
#Name==Name
#Nick Name==Nick Name
Homepage \(appears on every <a href="Supporter.html">Supporter Page</a> as long as your peer is online\)==首页(显示在每个<a href="Supporter.html">支持者</a> 页面中, 前提是您的peer在线).
#eMail==eMail
#ICQ==ICQ
#Jabber==Jabber
#Yahoo!==Yahoo!
#MSN==MSN
#Skype==Skype
Comment==注释
"Save"=="保存"
You can use <==在这里您可以用<
> here.==>.
#-----------------------------
#File: ConfigProperties_p.html
#---------------------------
Advanced Config==高级设置
Here are all configuration options from YaCy.==这里显示YaCy所有设置.
You can change anything, but some options need a restart, and some options can crash YaCy, if wrong values are used.==您可以改变任何这里的设置, 当然, 有的需要重启才能生效, 有的甚至能引起YaCy崩溃.
For explanation please look into defaults/yacy.init==详细内容请参考defaults/yacy.init
"Save"=="保存"
#-----------------------------
#File: ConfigRobotsTxt_p.html
#---------------------------
Exclude Web-Spiders==排除Web-Spider
Here you can set up a robots.txt for all webcrawlers that try to access the webinterface of your peer.==在这里您可以创建一个robots.txt, 以阻止试图访问您peer网络接口的网络爬虫.
is a volunteer agreement most search-engines \(including YaCy\) follow.==是一个大多数搜索引擎(包括YaCy)都遵守的协议.
It disallows crawlers to access webpages or even entire domains.==它会阻止网络爬虫(crawlers)进入网页甚至是整个域.
Deny access to==禁止访问以下页面
Entire Peer==整个peer
Status page==状态页面
Network pages==网络页面
Surftips==建议
News pages==新闻
Blog==博客
Wiki=维基
Public bookmarks==公共书签
Home Page==首页
File Share==共享文件
"Save restrictions"=="保存"
#-----------------------------
#File: ConfigSearchBox.html
#---------------------------
Integration of a Search Box==搜索框设置
We give information how to integrate a search box on any web page that==如何将一个搜索框集成到任意
calls the normal YaCy search window.==调用YaCy搜索的页面.
Simply use the following code:==使用以下代码:
MySearch== 我的搜索
"Search"=="搜索"
This would look like:==示例:
This does not use a style sheet file to make the integration into another web page with a different style sheet easier.==在这里并没有使用样式文件, 因为这样会比较容易将其嵌入到不同样式的页面里.
You would need to change the following items:==您可能需要以下条目:
Replace the given colors \#eeeeee \(box background\) and \#cccccc \(box border\)==替换已给颜色 #eeeeee (框架背景) 和 #cccccc (框架边框)
Replace the word \"MySearch\" with your own message==用您想显示的信息替换"我的搜索"
#-----------------------------
#File: ConfigUpdate_p.html
#---------------------------
Manual System Update==系统升级
Current installed Release==当前版本
Available Releases==可用版本
"Download Release"=="下载更新"
"Check for new Release"=="检查更新"
Downloaded Releases==已下载
No downloaded releases available for deployment.==无可用更新.
no&nbsp;automated installation on development environments==开发环境中自动安装
"Install Release"=="安装更新"
"Delete Release"=="删除更新"
Automatic Update==自动更新
check for new releases, download if available and restart with downloaded release==检查更新, 如果可用则重启并使用
"Check \+ Download \+ Install Release Now"=="检查 + 下载 + 现在安装"
Download of release \#\[downloadedRelease\]\# finished. Restart Initiated.== 已完成下载 #[downloadedRelease]# . 重启并初始化.
No more recent release found.==无最近更新.
Release will be installed. Please wait.==准备安装更新. 请稍等.
You installed YaCy with a package manager.==您使用包管理器安装的YaCy.
To update YaCy, use the package manager:==用包管理器以升级YaCy:
Omitting update because this is a development environment.==因当前为开发环境, 忽略安装升级.
Omitting update because download of release \#\[downloadedRelease\]\# failed.==下载 #[downloadedRelease]# 失败, 忽略安装升级.
Automated System Update==系统自动升级
manual update==手动升级
no automatic look-up, updates can be made manually using this interface \(see options above\)==无自动检查更新时, 可以使用此功能安装更新(参见上述).
automatic update==自动更新
updates are made within fixed cycles:==每隔一定时间自动检查更新:
Time between lookup==检查周期
hours==小时
Release blacklist==版本黑名单
regex on release number strings==版本号正则表达式
Release type==版本类型
only main releases==仅主版本号
any release including developer releases==任何版本, 包括测试版
Signed autoupdate:==签名升级:
only accept signed files==仅接受签名文件
"Submit"=="提交"
Accepted Changes.==已接受改变.
System Update Statistics==系统升级状况
Last System Lookup==上一次查找更新
never==从未
Last Release Download==最近一次下载更新
Last Deploy==最近一次应用更新
#-----------------------------
#File: Connections_p.html
#---------------------------
Connection Tracking==连接跟踪
Incoming Connections==进入连接
Showing \#\[numActiveRunning\]\# active, \#\[numActivePending\]\# pending connections from a max. of \#\[numMax\]\# allowed incoming connections.==显示 #[numActiveRunning]# 活动, #[numActivePending]# 挂起连接, 最大允许 #[numMax]# 个进入连接.
Protocol</td>==协议</td>
Duration==持续时间
Source IP\[:Port\]==来源IP[:端口]
Dest. IP\[:Port\]==目标IP[:端口]
Command</td>==命令</td>
Used==使用的
Close==关闭
Waiting for new request nr.==等待新请求数.
Outgoing Connections==外出连接
Showing \#\[clientActive\]\# pooled outgoing connections used as:==显示 #[clientActive]# 个外出链接, 用作:
Duration==持续时间
#ID==ID
#-----------------------------
#File: CookieMonitorIncoming_p.html
#---------------------------
Incoming Cookies Monitor==进入Cookies监视
Cookie Monitor: Incoming Cookies==Cookie监视: 进入Cookies
This is a list of Cookies that a web server has sent to clients of the YaCy Proxy:==Web服务器已向YaCy代理客户端发送的Cookies:
Showing \#\[num\]\# entries from a total of \#\[total\]\# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 个Cookies.
Sending Host==发送主机
Date</td>==日期</td>
Receiving Client==接收主机
#Cookie==Cookie
"Enable Cookie Monitoring"=="开启Cookie监视"
"Disable Cookie Monitoring"=="关闭Cookie监视"
#-----------------------------
#File: CookieMonitorOutgoing_p.html
#---------------------------
Outgoing Cookies Monitor==外出Cookie监视
Cookie Monitor: Outgoing Cookies==Cookie监视: 外出Cookie
This is a list of cookies that browsers using the YaCy proxy sent to webservers:==YaCy代理以通过浏览器向Web服务器发送的Cookie:
Showing \#\[num\]\# entries from a total of \#\[total\]\# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 个Cookies.
Receiving Host==接收主机
Date</td>==日期</td>
Sending Client==发送主机
#Cookie==Cookie
"Enable Cookie Monitoring"=="开启Cookie监视"
"Disable Cookie Monitoring"=="关闭Cookie监视"
#-----------------------------
#File: CrawlProfileEditor_p.html
#---------------------------
>Crawl Profile Editor<==>Crawl文件编辑<
>Crawler Steering<==>Crawler向导<
>Crawl Scheduler<==>定期Crawl<
>Scheduled Crawls can be modified in this table<==>请在下表中修改已安排的crawl<
Crawl profiles hold information about a crawl process that is currently ongoing.==Crawl文件里保存有正在运行的crawl进程信息.
#Crawl profiles hold information about a specific URL which is internally used to perform the crawl it belongs to.==Crawl Profile enthalten Informationen über eine spezifische URL, welche intern genutzt wird, um nachzuvollziehen, wozu der Crawl gehört.
#The profiles for remote crawls, <a href="ProxyIndexingMonitor_p.html">indexing via proxy</a> and snippet fetches==Die Profile für Remote Crawl, <a href="ProxyIndexingMonitor_p.html">Indexierung per Proxy</a> und Snippet Abrufe
#cannot be altered here as they are hard-coded.==können nicht verändert werden, weil sie "hard-coded" sind.
Crawl Profile List==Crawl文件列表
Crawl Thread==Crawl线程
#Status==Status
#Start URL==Start URL
>Depth</strong>==>深度</strong>
Must Match==必须匹配
Must Not Match==必须不符
MaxAge</strong>==最长寿命</strong>
#Auto Filter Depth</strong>==Auto Filter Tiefe</strong>
#Auto Filter Content</strong>==Auto Inhalts Filter</strong>
Max Page Per Domain</strong>==每个域中拥有最大页面</strong>
Accept==接受
Fill Proxy Cache==填充代理缓存
Local Text Indexing==本地文本索引
Local Media Indexing==本地媒体索引
Remote Indexing==远程索引
#Status / Action==Status / Aktion
#terminated::active==beendet::aktiv
no::yes==否::是
Running==运行中
"Terminate"=="终结"
Finished==已完成
"Delete"=="删除"
"Delete finished crawls"=="删除已完成的crawl进程"
Select the profile to edit==选择要修改的文件
"Edit profile"=="修改文件"
An error occurred during editing the crawl profile:==修改crawl文件时发生错误:
Edit Profile==修改文件
"Submit changes"=="提交改变"
#-----------------------------
#File: CrawlResults.html
#---------------------------
Crawl Results<==Crawl结果<
Overview</a>==概况</a>
Receipts</a>==回执</a>
Queries</a>=请求</a>
DHT Transfer==DHT转移
Proxy Use==Proxy使用
Local Crawling</a>==本地crawl</a>
Global Crawling</a>==全球crawl</a>
Surrogate Import</a>==导入备份</a>
>Crawl Results Overview<==>Crawl结果一览<
These are monitoring pages for the different indexing queues.==索引队列监视页面.
YaCy knows 5 different ways to acquire web indexes. The details of these processes \(1-5\) are described within the submenu's listed==YaCy使用5种不同的方式来获取网络索引. 进程(1-5)的细节在子菜单中显示
above which also will show you a table with indexing results so far. The information in these tables is considered as private,==以上列表也会显示目前的索引结果. 表中的信息应该视为隐私,
so you need to log-in with your administration password.==所以您最好设置一个有密码的管理员账户来查看.
Case \(6\) is a monitor of the local receipt-generator, the opposed case of \(1\). It contains also an indexing result monitor but is not considered private==事件(6)与事件(1)相反, 显示本地回执. 它也包含索引结果, 但不属于隐私
since it shows crawl requests from other peers.==因为它含有来自其他peer的请求.
Case \(7\) occurs if surrogate files are imported==如果备份被导入, 则事件(7)发生.
The image above illustrates the data flow initiated by web index acquisition.==上图为网页索引的数据流.
Some processes occur double to document the complex index migration structure.==一些进程可能出现双重文件索引结构混合的情况.
\(1\) Results of Remote Crawl Receipts==(1) 远程crawl回执结果
This is the list of web pages that this peer initiated to crawl,==这是peer初始化时crawl的网页列表,
but had been crawled by <em>other</em> peers.==但是先前它们已被<em>其他</em>peer crawl.
This is the 'mirror'-case of process \(6\).==这是进程(6)的'镜像'实例
<em>Use Case:</em> You get entries here, if you start a local crawl on the 'Index Creation'-Page and check the==<em>用法:</em> 您可以在这获得细目, 如果'索引创建'页面中选中了
'Do Remote Indexing'-flag. Every page that a remote peer indexes upon this peer's request=='远程索引'. 每一个远端peer索引页面所依据的peer请求
is reported back and can be monitored here.==都在这里显示.
\(2\) Results for Result of Search Queries==(2) 搜索查询结果报告页
This index transfer was initiated by your peer by doing a search query.==通过搜索, 此索引转移能被初始化.
The index was crawled and contributed by other peers.==这个索引是被其他peer贡献与crawl的.
<em>Use Case:</em> This list fills up if you do a search query on the 'Search Page'==<em>用法:</em>当您在'搜索页面'进行搜索时, 此表会被填充.
\(3\) Results for Index Transfer==(3) 索引转移结果
The url fetch was initiated and executed by other peers.==被其他peer初始化并抓取的URL.
These links here have been transmitted to you because your peer is the most appropriate for storage according to==这些链接已经被传递给你, 因为根据全球分布哈希表的计算,
the logic of the Global Distributed Hash Table.==您的peer是最适合存储它们的.
<em>Use Case:</em> This list may fill if you check the 'Index Receive'-flag on the 'Index Control' page==<em>用法:</em>当您选中了在'索引控制'里的'接收索引'时, 这个表会被填充.
\(4\) Results for Proxy Indexing==(4) 代理索引结果
These web pages had been indexed as result of your proxy usage.==以下是由于使用代理而索引的网页.
No personal or protected page is indexed==不包括私有或受保护网页
such pages are detected by Cookie-Use or POST-Parameters \(either in URL or as HTTP protocol\)==通过检测cookie用途和提交参数(链接或者HTTP协议)能够识别出此类网页,
and automatically excluded from indexing.==并在索引时自动排除.
<em>Use Case:</em> You must use YaCy as proxy to fill up this table.==<em>用法:</em>必须把YaCy用作代理才能填充此表格.
Set the proxy settings of your browser to the same port as given==将浏览器代理端口设置为
on the 'Settings'-page in the 'Proxy and Administration Port' field.=='设置'页面'代理和管理端口'选项中的端口.
\(5\) Results for Local Crawling==(5) 本地crawl结果
These web pages had been crawled by your own crawl task.==您的crawl任务crawl了这些网页.
<em>Use Case:</em> start a crawl by setting a crawl start point on the 'Index Create' page.==<em>用法:</em>在'索引创建'页面设置crawl起始点以开始crawl.
\(6\) Results for Global Crawling==(6) 全球crawl结果
These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已经被您的peer索引, 但是它们是被远端peer crawl的.
This is the 'mirror'-case of process \(1\).==这是进程(1)的'镜像'实例.
<em>Use Case:</em> This list may fill if you check the 'Accept remote crawling requests'-flag on the 'Index Crate' page==<em>用法:</em>如果您选中了'索引创建'页面的'接受远端crawl请求', 则会在此列表中显示.
The stack is empty.==栈为空.
Statistics about \#\[domains\]\# domains in this stack:==此栈显示有关 #[domains]# 域的数据:
\(7\) Results from surrogates import==\(7\) 备份导入结果
These records had been imported from surrogate files in DATA/SURROGATES/in==这些记录从 DATA/SURROGATES/in 中的备份文件中导入
<em>Use Case:</em> place files with dublin core metadata content into DATA/SURROGATES/in or use an index import method==<em>用法:</em>将包含Dublin核心元数据的文件放在 DATA/SURROGATES/in 中, 或者使用索引导入方式
\(i.e. <a href="IndexImportMediawiki_p.html">MediaWiki import</a>, <a href="IndexImportOAIPMH_p.html">OAI-PMH retrieval</a>\)==(例如 <a href="IndexImportMediawiki_p.html">MediaWiki 导入</a>, <a href="IndexImportOAIPMH_p.html">OAI-PMH 导入</a>\)
#Domain==Domain
#URLs=URLs
"delete all"=="全部删除"
Showing all \#\[all\]\# entries in this stack.==显示栈中所有 #[all]# 条目.
Showing latest \#\[count\]\# lines from a stack of \#\[all\]\# entries.==显示栈中 #[all]# 条目的最近 #[count]# 行.
"clear list"=="清除列表"
#Initiator==Initiator
>Executor==>执行
>Modified==>已改变
>Words==>单词
>Title==>标题
#URL==URL
"delete"=="删除"
#-----------------------------
#File: CrawlStartExpert_p.html
#---------------------------
Expert Crawl Start==Crawl高级设置
Start Crawling Job:==开始Crawl任务:
You can define URLs as start points for Web page crawling and start crawling here. \"Crawling\" means that YaCy will download the given website, extract all links in it and then download the content behind these links. This is repeated as long as specified under \"Crawling Depth\".==您可以将指定URL作为网页crawling的起始点. "Crawling"意即YaCy会下载指定的网站, 并解析出网站中链接的所有内容, 其深度由"Crawling深度"指定.
Attribute<==属性<
Value<==值<
Description<==描述<
>Starting Point:==>起始点:
>From URL==>来自URL
From Sitemap==来自站点地图
From File==来自文件
Existing start URLs are always re-crawled.==已存在的起始链接将会被重新crawl.
Other already visited URLs are sorted out as \"double\", if they are not allowed using the re-crawl option.==对于已经访问过的链接, 如果它们不允许被重新crawl则被标记为'重复'.
Create Bookmark==创建书签
\(works with "Starting Point: From URL" only\)==(仅从"起始链接"开始)
Title<==标题<
Folder<==目录<
This option lets you create a bookmark from your crawl start URL.==此选项会将起始链接设为书签.
Crawling Depth</label>==Crawling深度</label>
This defines how often the Crawler will follow links \(of links..\) embedded in websites.==此选项为crawler跟踪网站嵌入链接的深度.
0 means that only the page you enter under \"Starting Point\" will be added==设置为 0 代表仅将"起始点"
to the index. 2-4 is good for normal indexing. Values over 8 are not useful, since a depth-8 crawl will==添加到索引. 建议设置为2-4. 由于设置为8会索引将近25,000,000,000个页面, 所以不建议设置大于8的值,
index approximately 25.600.000.000 pages, maybe this is the whole WWW.==这可能是整个互联网的内容.
Scheduled re-crawl<==已安排的重新Crawl<
>no&nbsp;doubles<==>无&nbsp;重复<
run this crawl once and never load any page that is already known, only the start-url may be loaded again.==仅运行一次crawl, 并且不载入重复网页, 可能会重载起始链接.
>re-load<==>重载<
run this crawl once, but treat urls that are known since==运行此crawl, 但是将链接视为从
>years<==>年<
>months<==>月<
>days<==>日<
>hours<==>时<
not as double and load them again. No scheduled re-crawl.==不重复并重新载入. 无安排的crawl任务.
>scheduled<==>定期<
after starting this crawl, repeat the crawl every==运行此crawl后, 每隔
> automatically.==> 运行.
A web crawl performs a double-check on all links found in the internet against the internal database. If the same url is found again,==网页crawl参照自身数据库, 对所有找到的链接进行重复性检查. 如果链接重复,
then the url is treated as double when you check the \'no doubles\' option. A url may be loaded again when it has reached a specific age,==并且'无重复'选项打开, 则被以重复链接对待. 如果链接存在时间超过一定时间,
to use that check the \'re-load\' option. When you want that this web crawl is repeated automatically, then check the \'scheduled\' option.==并且'重载'选项打开, 则此链接会被重新读取. 当您想这些crawl自动运行时, 请选中'定期'选项.
In this case the crawl is repeated after the given time and no url from the previous crawl is omitted as double.==此种情况下, crawl会每隔一定时间自动运行并且不会重复寻找前一次crawl中的链接.
Must-Match Filter==必须与过滤器匹配
Use filter==使用过滤器
Restrict to start domain==限制为起始域
Restrict to sub-path==限制为子路经
#The filter is an emacs-like regular expression that must match with the URLs which are used to be crawled;==Dieser Filter ist ein emacs-ähnlicher regulärer Ausdruck, der mit den zu crawlenden URLs übereinstimmen muss;
The filter is a <a href=\"http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html\">regular expression</a>==过滤是一组<a href="http://java.sun.com/j2se/1.5.0/docs/api/java/util/regex/Pattern.html">正则表达式</a>
that must match with the URLs which are used to be crawled; default is \'catch all\'.==, 它们表示了要抓取的链接规则; 默认是'抓取所有'.
Example: to allow only urls that contain the word \'science\', set the filter to \'.*science.*\'.==比如: 如果仅抓取包含'科学'的链接, 可将过滤器设置为 '.*.*'.
You can also use an automatic domain-restriction to fully crawl a single domain.==您也可以使用域限制来抓取整个域.
Must-Not-Match Filter==必须与过滤器不匹配
This filter must not match to allow that the page is accepted for crawling.==此过滤器表示了所有不被抓取的网页规则.
The empty string is a never-match filter which should do well for most cases.==对于大多数情况可以留空.
If you don't know what this means, please leave this field empty.==如果您不知道这些设置的意义, 请将此留空.
#Re-crawl known URLs:==Re-crawl bekannter URLs:
Use</label>:==使用</label>:
#It depends on the age of the last crawl if this is done or not: if the last crawl is older than the given==Es hängt vom Alter des letzten Crawls ab, ob dies getan oder nicht getan wird: wenn der letzte Crawl älter als das angegebene
#Auto-Dom-Filter:==Auto-Dom-Filter:
#This option will automatically create a domain-filter which limits the crawl on domains the crawler==Diese Option erzeugt automatisch einen Domain-Filter der den Crawl auf die Domains beschränkt ,
#will find on the given depth. You can use this option i.e. to crawl a page with bookmarks while==die auf der angegebenen Tiefe gefunden werden. Diese Option kann man beispielsweise benutzen, um eine Seite mit Bookmarks zu crawlen
#restricting the crawl on only those domains that appear on the bookmark-page. The adequate depth==und dann den folgenden Crawl automatisch auf die Domains zu beschränken, die in der Bookmarkliste vorkamen. Die einzustellende Tiefe für
#for this example would be 1.==dieses Beispiel wäre 1.
#The default value 0 gives no restrictions.==Der Vorgabewert 0 bedeutet, dass nichts eingeschränkt wird.
Maximum Pages per Domain:==每个域允许的最多页面:
Page-Count==页面计数
You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==您可以将从单个域中抓取和索引的页面数目限制为此值.
You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==您可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域.
the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域会被自动忽略.
Accept URLs with==接受链接
dynamic URLs==动态URL
A questionmark is usually a hint for a dynamic page. URLs pointing to dynamic content should usually not be crawled. However, there are sometimes web pages with static content that==动态页面通常用问号标记. 通常不会抓取指向动态页面的链接. 然而, 也有些含有静态内容的页面用问号标记.
is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==如果您不确定, 不要选中此项, 以防抓取时陷入死循环.
Store to Web Cache==存储到网页缓存
This option is used by default for proxy prefetch, but is not needed for explicit crawling.==这个选项默认打开, 并用于预抓取, 但对于精确抓取此选项无效.
Policy for usage of Web Cache==网页缓存使用策略
The caching policy states when to use the cache during crawling:==缓存策略即表示抓取时何时使用缓存:
#no&nbsp;cache==no&nbsp;cache
no&nbsp;cache==无&nbsp;缓存
#if&nbsp;fresh==if&nbsp;fresh
if&nbsp;fresh==如果&nbsp;有更新&nbsp;缓存&nbsp;命中
#if&nbsp;exist==if&nbsp;exist
if&nbsp;exist==如果&nbsp;缓存&nbsp;命中
#cache&nbsp;only==cache&nbsp;only
cache&nbsp;only==仅&nbsp;缓存
never use the cache, all content from fresh internet source;==从不使用缓存内容, 全部从因特网资源即时抓取;
use the cache if the cache exists and is fresh using the proxy-fresh rules;==如果缓存中存在并且是最新则使用代理刷新规则;
use the cache if the cache exist. Do no check freshness. Otherwise use online source;==如果缓存存在则使用缓存. 不检查是否最新. 否则使用最新源;
never go online, use all content from cache. If no cache exist, treat content as unavailable==从不检查线上内容, 全部使用缓存内容. 如果缓存存在, 将其视为无效
Do Local Indexing:==本地索引:
index text==索引文本
index media==索引媒体
This enables indexing of the wepages the crawler will download. This should be switched on by default, unless you want to crawl only to fill the==此选项开启时, crawler会下载网页索引. 默认打开, 除非您仅要填充
Document Cache without indexing.==文件缓存而不进行索引.
Do Remote Indexing==远程索引
Describe your intention to start this global crawl \(optional\)==在这填入您要进行全球crawl的目的(可选)
This message will appear in the 'Other Peer Crawl Start' table of other peers.==此消息会显示在其他peer的'其他peer crawl起始'列表中.
If checked, the crawler will contact other peers and use them as remote indexers for your crawl.==如果选中, crawler会联系其他peer, 并将其作为此次crawl的远程索引器.
If you need your crawling results locally, you should switch this off.==如果您仅想crawl本地内容, 请关闭此设置.
Only senior and principal peers can initiate or receive remote crawls.==仅高级peer和主peer能初始化或者接收远程crawl.
A YaCyNews message will be created to inform all peers about a global crawl==YaCy新闻消息中会通知其他peer这个全球crawl,
so they can omit starting a crawl with the same start point.==然后他们才能以相同起始点进行crawl.
Exclude <em>static</em> Stop-Words==排除<em>静态</em>非索引字
This can be useful to circumvent that extremely common words are added to the database, i.e. \"the\", \"he\", \"she\", \"it\"... To exclude all words given in the file <tt>yacy.stopwords</tt> from indexing,==此项用于规避极常用字, 比如 "个", "他", "她", "它"等. 当要在索引时排除所有在<tt>yacy.stopwords</tt>文件中的字词时,
check this box.==请选中此项.
"Start New Crawl"=="开始新crawl"
#-----------------------------
#File: CrawlStartIntranet_p.html
#---------------------------
#Intranet Crawl Start==Intranet Crawl Start
When an index domain is configured to contain intranet links,==当索引域中包含局域网链接时,
the intranet may be scanned for available servers.==可用服务器会扫描它们.
Please select below the servers in your intranet that you want to fetch into the search index.==以下服务器在您的局域网中, 请选择您想添加到搜索索引中的主机.
This network definition does not allow intranet links.==当前网络定义不允许局域网链接.
A list of intranet servers is only available if you confiugure YaCy to index intranet targets.==仅当您将YaCy配置为索引局域网目标, 以下条目才有效.
To do so, open the <a href=\"ConfigBasic.html\">Basic Configuration</a> servlet and select the \'Intranet Indexing\' use case.==将YaCy配置为索引局域网目标, 打开<a href="ConfigBasic.html">基本设置</a>页面, 选中'索引局域网'.
Available Intranet Server==可用局域网服务器
#>IP<==>IP<
#>URL<==>URL<
>Process<==>状态<
>not in index<==>不在索引中<
>indexed<==>已加入索引<
"Add Selected Servers to Crawler"=="添加选中服务器到crawler"
#-----------------------------
#File: CrawlStartScanner_p.html
#---------------------------
Network Scanner==网络扫描器
YaCy can scan a network segment for available http, ftp and smb server.==YaCy可扫描http, ftp 和smb服务器.
You must first select a IP range and then, after this range is scanned,==须先指定IP范围, 再进行扫描,
it is possible to select servers that had been found for a full-site crawl.==才有可能选择主机并将其作为全站crawl的服务器.
No servers had been detected in the given IP range \#\[iprange\]\#.
Please enter a different IP range for another scan.==未检测到可用服务器, 请重新指定IP范围.
Please wait...==请稍候...
>Scan the network<==>扫描网络<
Scan Range==扫描范围
Scan sub-range with given host==扫描给定主机的子域
Full Intranet Scan:==局域网完全扫描:
Do not use intranet scan results, you are not in an intranet environment!==由于您当前不处于局域网环境, 请不要使用局域网扫描结果!
>Scan Cache<==>扫描缓存<
accumulate scan results with access type \"granted\" into scan cache \(do not delete old scan result\)==将访问类型为"已授权"的扫描结果累积到扫描缓存中(不删除旧扫描结果)
>Service Type<==>服务类型<
#>ftp==>FTP
#>smb==>SMB
#>http==>HTTP
#>https==>HTTPS
>Scheduler<==>定期扫描<
run only a scan==运行一次扫描
scan and add all sites with granted access automatically. This disables the scan cache accumulation.==扫描并自动添加所有已授权站点. 此选项会禁用扫描缓存累积.
Look every==每隔
>minutes<==>分<
>hours<==>时<
>days<==>天<
again and add new sites automatically to indexer.==再次检视, 并自动添加新站点到索引器中.
Sites that do not appear during a scheduled scan period will be excluded from search results.==定期扫描期间未出现的站点会被从搜索结果中排除.
"Scan"=="扫描"
The following servers had been detected:==已检测到以下服务器:
Available server within the given IP range==指定IP范围内的可用服务器
>Protocol<==>协议<
#>IP<==>IP<
#>URL<==>URL<
>Access<==>权限<
>Process<==>状态<
>unknown<==>未知<
>empty<==>空<
>granted<==>已授权<
>denied<==>拒绝<
>not in index<==>未在索引中<
>indexed<==>已被索引<
"Add Selected Servers to Crawler"=="添加选中服务器到crawler"
#-----------------------------
#File: CrawlStartSite_p.html
#---------------------------
>Site Crawling<==>crawl站点<
Site Crawler:==站点crawler:
Download all web pages from a given domain or base URL.==下载给定域或者URL里的所有网页.
>Site Crawl Start<==>起始crawl站点<
>Site<==>站点<
Link-List of URL==URL链接表
>Scheduler<==>定时器<
run this crawl once==运行此crawl一次
scheduled, look every==每
>minutes<==>分钟<
>hours<==>小时<
>days<==>天<
for new documents automatically.==, 以自动查找新文件.
>Path<==>路径<
load all files in domain==载入域中所有文件
load only files in a sub-path of given url==仅载入给定URL子路径中文件
>Limitation<==>限制<
not more than <==不超过<
>documents<==>文件<
>Dynamic URLs<==>动态URL<
allow <==允许<
urls with a \'\?\' in the path==路径中含有'?'
#>Start<==>Start<
"Start New Crawl"=="开始新crawl"
Hints<==提示<
>Crawl Speed Limitation<==>crawl速度限制<
No more that two pages are loaded from the same host in one second \(not more that 120 document per minute\) to limit the load on the target server.==每秒最多从同一主机中载入两个页面(每分钟不超过120个文件)以限制目标主机负载.
>Target Balancer<==>目标平衡器<
A second crawl for a different host increases the throughput to a maximum of 240 documents per minute since the crawler balances the load over all hosts.==对于不同主机的二次crawl, 会上升到每分钟最多240个文件, 因为crawler会自动平衡所有主机的负载.
>High Speed Crawling<==>高速crawl<
A \'shallow crawl\' which is not limited to a single host \(or site\)==不限于单个主机(或站点)的'浅crawl'方式,
can extend the pages per minute \(ppm\) rate to unlimited documents per minute when the number of target hosts is high.==在目标主机数量很大时, 能将每分钟页面数(ppm)提升至无限制.
This can be done using the <a href=\"CrawlStartExpert_p.html\">Expert Crawl Start</a> servlet.==对应设置<a href="CrawlStartExpert_p.html">专家模式起始crawl</a>选项.
>Scheduler Steering<==>定时器向导<
The scheduler on crawls can be changed or removed using the <a href=\"Table_API_p.html\">API Steering</a>.==可以使用<a href="Table_API_p.html">API向导</a>改变或删除crawl定时器.
#-----------------------------
#File: Help.html
#---------------------------
YaCy: Help==YaCy: 帮助
Tutorial==新手教程
You are using the administration interface of your own search engine==您正在使用自己搜索引擎的管理界面
You can create your own search index with YaCy==您可以用YaCy创建属于自己的搜索索引
To learn how to do that, watch one of the demonstration videos below==观看以下demo视频以了解更多
#-----------------------------
#File: index.html
#---------------------------
YaCy \'\#\[clientname\]\#\': Search Page==YaCy '#[clientname]#': 搜索页面
#kiosk mode==Kiosk Modus
"Search"=="搜索"
#Text==Text
Images==图像
#Audio==Audio
Video==视频
Applications==应用程序
more options...==更多设置...
advanced parameters==高级参数
Max. number of results==搜索结果最多有
Results per page==每个页面显示结果
Resource==资源
global==全球
>local==>本地
Global search is disabled because==全球搜索被禁用, 因为
DHT Distribution</a> is==DHT分发</a>被
Index Receive</a> is==索引接收</a>被
DHT Distribution and Index Receive</a> are==DHT分发和索引接受</a>被
disabled.\#\(==禁用.#(
URL mask==URL过滤
restrict on==限制
show all==显示所有
#needs rework!!!
Prefer mask==首选过滤
Constraints==约束
only index pages==仅索引页面
"authentication required"=="需要认证"
Disable search function for users without authorization==禁止未授权用户搜索
Enable web search to everyone==允许所有人搜索
the peer-to-peer network==P2P网络
only the local index==仅本地索引
Query Operators==查询操作符
restrictions==限制
only urls with the &lt;phrase&gt; in the url==仅包含&lt;phrase&gt;的URL
only urls with extension==仅带扩展名的URL
only urls from host==仅来自主机的URL
only pages with as-author-anotated==仅标注了作者的页面
only pages from top-level-domains==仅来自顶级域名的页面
only resources from http or https servers==仅来自http/https服务器的资源
only resources from ftp servers==仅来自ftp服务器的资源
they are rare==很少
crawl them yourself==您需要crawl它们
only resources from smb servers==仅来自smb服务器的资源
Intranet Indexing</a> must be selected==局域网索引</a>必须被选中
only files from a local file system==仅来自本机文件系统的文件
ranking modifier==排名修改
sort by date==按日期排序
latest first==最新者居首
multiple words shall appear near==多个关键字应相邻出现
doublequotes==双引号
prefer given language==首选语言
an ISO639 2-letter code==ISO639标准的双字母代码
heuristics==启发式
add search results from scroogle==添加来自scroogle的搜索结果
add search results from blekko==添加来自blekko的搜索结果
Search Navigation==搜索导航
keyboard shotcuts==快捷键
tab or page-up==Tab或者Page Up
next result page==下一页
page-down==Page Down
previous result page==上一页
automatic result retrieval==自动结果检索
browser integration==浏览器集成
after searching, click-open on the default search engine in the upper right search field of your browser and select 'Add "YaCy Search.."'==搜索后, 点击浏览器右上方区域中的默认搜索引擎, 并选择'添加"YaCy"'
search as rss feed==作为RSS-Feed搜索
click on the red icon in the upper right after a search. this works good in combination with the '/date' ranking modifier. See an==搜索后点击右上方的红色图标. 配合'/date'排名修改, 能取得较好效果.
>example==>例
json search results==JSON搜索结果
for ajax developers: get the search rss feed and replace the '.rss' extension in the search result url with '.json'==对AJAX开发者: 获取搜索结果页的RSS-Feed, 并用'.json'替换'.rss'搜索结果链接中的扩展名
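# Note: as a concrete illustration of the extension swap described above (host,
# port and query string are assumed values, 8090 being the YaCy default port):
#   http://localhost:8090/yacysearch.rss?query=yacy  ->  http://localhost:8090/yacysearch.json?query=yacy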
#-----------------------------
#File: IndexCleaner_p.html
#---------------------------
Index Cleaner==索引整理
>URL-DB-Cleaner==>URL-DB-清理
#ThreadAlive:
#ThreadToString:
Total URLs searched:==搜索到的全部URL:
Blacklisted URLs found:==搜索到的黑名单URL:
Percentage blacklisted:==黑名单占百分比:
last searched URL:==最近搜索到的URL:
last blacklisted URL found:==最近搜索到的黑名单URL:
>RWI-DB-Cleaner==>RWI-DB-清理
RWIs at Start:==启动时RWIs:
RWIs now:==当前RWIs:
wordHash in Progress:==处理中的Hash值:
last wordHash with deleted URLs:==已删除URL的Hash值:
Number of deleted URLs in on this Hash:==此Hash中已删除的URL数:
URL-DB-Cleaner - Clean up the database by deletion of blacklisted urls:==URL-DB-清理 - 清理数据库, 会删除黑名单URl:
Start/Resume==开始/继续
Stop==停止
Pause==暂停
RWI-DB-Cleaner - Clean up the database by deletion of words with reference to blacklisted urls:==RWI-DB-清理 - 清理数据库, 会删除与黑名单URL相关的信息:
#-----------------------------
#File: IndexControlRWIs_p.html
#---------------------------
Reverse Word Index Administration==反向关键字索引管理
The local index currently contains \#\[wcount\]\# reverse word indexes==本地索引当前包含 #[wcount]# 个反向关键字索引
RWI Retrieval \(= search for a single word\)==RWI检索(= 搜索单个关键字)
Select Segment:==选择片段:
Retrieve by Word:<==输入单词:<
"Show URL Entries for Word"=="显示关键字相关的URL"
Retrieve by Word-Hash==输入单词Hash值
"Show URL Entries for Word-Hash"=="显示关键字Hash值相关的URL"
"Generate List"=="生成列表"
Cleanup==清理
>Index Deletion<==>删除索引<
>Delete Search Index<==>删除搜索索引<
Stop Crawler and delete Crawl Queues==停止crawl并删除crawl队列
Delete HTTP &amp; FTP Cache==删除HTTP &amp; FTP缓存
Delete robots.txt Cache==删除robots.txt缓存
Delete cached snippet-fetching failures during search==删除搜索期间缓存的片段抓取失败记录
"Delete"=="删除"
No entry for word \'\#\[word\]\#\'==无'#[word]#'的对应条目
No entry for word hash==无对应此关键字Hash的条目
Search result==搜索结果
total URLs</td>==全部URL</td>
appearance in</td>==出现在</td>
in link type</td>==链接类型</td>
document type</td>==文件类型</td>
<td>description</td>==<td>描述</td>
<td>title</td>==<td>标题</td>
<td>creator</td>==<td>创建者</td>
<td>subject</td>==<td>主题</td>
<td>url</td>==<td>URL</td>
<td>emphasized</td>==<td>高亮</td>
<td>image</td>==<td>图像</td>
<td>audio</td>==<td>音频</td>
<td>video</td>==<td>视频</td>
<td>app</td>==<td>应用</td>
index of</td>==索引</td>
>Selection</td>==>选择</td>
Display URL List==显示URL列表
Number of lines==行数
all lines==全部
"List Selected URLs"=="列出选中URL"
Transfer RWI to other Peer==传递RWI给其他peer
Transfer by Word-Hash==按字Hash值传递
"Transfer to other peer"=="传递"
to Peer==指定peer
<dd>select==<dd>选择
or enter a hash==或者输入peer的Hash值
Sequential List of Word-Hashes==字Hash值的顺序列表
No URL entries related to this word hash==无与此关键字Hash相关的URL条目
\#\[count\]\# URL entries related to this word hash==#[count]# 个URL条目与此关键字Hash相关
Resource</td>==资源</td>
Negative Ranking Factors==负向排名因素
Positive Ranking Factors==正向排名因素
Reverse Normalized Weighted Ranking Sum==反向归一化加权排名和
hash</td>==Hash</td>
dom length</td>==域长度</td>
ybr</td>==YBR</td>
#url comps</td>
url length</td>==URL长度</td>
pos in text</td>==文中位置</td>
pos of phrase</td>==短语位置</td>
pos in phrase</td>==在短语中位置</td>
word distance</td>==字间距离</td>
<td>authority</td>==<td>权威度</td>
<td>date</td>==<td>日期</td>
words in title</td>==标题字数</td>
words in text</td>==内容字数</td>
local links</td>==本地链接</td>
remote links</td>==远程链接</td>
hitcount</td>==命中数</td>
#props</td>==</td>
unresolved URL Hash==未解析URL Hash值
Word Deletion==删除关键字
Deletion of selected URLs==删除选中URL
delete also the referenced URL \(recommended, may produce unresolved references==同时删除关联的URL (推荐; 可能会在其他关键字索引处
at other word indexes but they do not harm\)==产生未解析关联, 但无害)
for every resolvable and deleted URL reference, delete the same reference at every other word where==对于每个可解析且已删除的URL关联, 在所有存在该关联的其他关键字处
the reference exists \(very extensive, but prevents further unresolved references\)==删除相同关联(开销很大, 但能避免产生更多未解析关联)
"Delete reference to selected URLs"=="删除与选中URL的关联"
"Delete Word"=="删除关键字"
Blacklist Extension==黑名单扩展
"Add selected URLs to blacklist"=="添加选中URL到黑名单"
"Add selected domains to blacklist"=="添加选中域到黑名单"
#-----------------------------
#File: IndexControlURLs_p.html
#---------------------------
URL References Administration==URL关联管理
The local index currently contains \#\[ucount\]\# URL references==目前本地索引含有 #[ucount]# 个URL关联
URL Retrieval==URL获取
Select Segment:==选择片段:
Retrieve by URL:<==输入URL:<
"Show Details for URL"=="显示细节"
Retrieve by URL-Hash==输入URL Hash值
"Show Details for URL-Hash"=="显示细节"
"Generate List"=="生成列表"
Statistics about top-domains in URL Database==URL数据库中顶级域数据
Show top==显示全部URL中的前
domains from all URLs.==个域.
"Generate Statistics"=="生成数据"
Statistics about the top-\#\[domains\]\# domains in the database:==数据库中前 #[domains]# 个域的统计数据:
"delete all"=="全部删除"
#Domain==Domain
#URLs==URLs
Sequential List of URL-Hashes==URL Hash顺序列表
Loaded URL Export==导出已加载URL
Export File==导出文件
#URL Filter==URL Filter
#Export Format==Export Format
#Only Domain <i>\(superfast\)==Only Domain <i>(superfast)
Only Domain:==仅域名:
Full URL List:==完整URL列表:
Plain Text List \(domains only\)==文本文件(仅域名)
HTML \(domains as URLs, no title\)==HTML (超链接格式的域名, 不包括标题)
#Full URL List <i>\(high IO\)==Full URL List <i>(high IO)
Plain Text List \(URLs only\)==文本文件(仅URL)
HTML \(URLs with title\)==HTML (带标题的URL)
#XML (RSS)==XML (RSS)
"Export URLs"=="导出URL"
Export to file \#\[exportfile\]\# is running .. \#\[urlcount\]\# URLs so far==正在导出到 #[exportfile]# .. 已经导出 #[urlcount]# 个URL
Finished export of \#\[urlcount\]\# URLs to file==已完成导出 #[urlcount]# 个URL到文件
Export to file \#\[exportfile\]\# failed:==导出到文件 #[exportfile]# 失败:
No entry found for URL-hash==未找到合适条目对应URL-Hash
#URL String</td>==URL String</td>
#Hash</td>==Hash</td>
#Description</td>==Description</td>
#Modified-Date</td>==Modified-Date</td>
#Loaded-Date</td>==Loaded-Date</td>
#Referrer</td>==Referrer</td>
#Doctype</td>==Doctype</td>
#Language</td>==Language</td>
#Size</td>==Size</td>
#Words</td>==Words</td>
"Show Content"=="显示内容"
"Delete URL"=="删除URL"
this may produce unresolved references at other word indexes but they do not harm==这可能和其他关键字产生未解析关联, 但是这并不影响系统性能
"Delete URL and remove all references from words"=="删除URl并从关键字中删除所有关联"
delete the reference to this url at every other word where the reference exists \(very extensive, but prevents unresolved references\)==在所有存在该关联的其他关键字处删除指向此URL的关联(开销很大, 但能避免未解析关联)
#-----------------------------
#File: IndexCreateLoaderQueue_p.html
#---------------------------
Loader Queue==加载器
The loader set is empty==无加载器
There are \#\[num\]\# entries in the loader set:==加载器中有 #[num]# 个条目:
Initiator==发起者
Depth==深度
#URL==URL
#-----------------------------
#File: IndexCreateParserErrors_p.html
#---------------------------
Parser Errors==解析错误
Rejected URL List:==拒绝URL列表:
There are \#\[num\]\# entries in the rejected-urls list.==在拒绝URL列表中有 #[num]# 个条目.
Showing latest \#\[num\]\# entries.==显示最近的 #[num]# 个条目.
"show more"=="更多"
"clear list"=="清除列表"
There are \#\[num\]\# entries in the rejected-queue:==拒绝队列中有 #[num]# 个条目:
#Initiator==Initiator
Executor==执行器
#URL==URL
Fail-Reason==错误原因
#-----------------------------
#File: ContentIntegrationPHPBB3_p.html
#---------------------------
Content Integration: Retrieval from phpBB3 Databases==内容集成: 从phpBB3数据库中导入
It is possible to extract texts directly from mySQL and postgreSQL databases.==能直接从mySQL和postgreSQL数据库中提取文本.
Each extraction is specific to the data that is hosted in the database.==每次提取都针对数据库中存放的数据.
This interface gives you access to the phpBB3 forums software content.==通过此接口能访问phpBB3论坛软件内容.
If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:==如果从使用phpMyAdmin读取数据库内容, 您可能会用到以下建议:
before importing large database dumps, set==在导入尺寸较大的数据库时,
in phpmyadmin/config.inc.php and place your dump file in /tmp \(Otherwise it is not possible to upload files larger than 2MB\)==设置phpmyadmin/config.inc.php的内容, 并将您的数据库文件放到 /tmp 目录下(否则不能上传大于2MB的文件)
deselect the partial import flag==取消部分导入
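# Note: the setting referred to above is phpMyAdmin's upload directory; as a
# hedged sketch (the exact path is an assumption), the line to add in
# phpmyadmin/config.inc.php would look like:
#   $cfg['UploadDir'] = '/tmp';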
When an export is started, surrogate files are generated into DATA/SURROGATE/in which are automatically fetched by an indexer thread.==导出过程开始时, 在 DATA/SURROGATE/in 目录下自动生成备份文件, 并且会被索引器自动抓取.
All indexed surrogate files are then moved to DATA/SURROGATES/out and can be re-cycled when an index is deleted.==所有已索引的备份文件随后会被移动到 DATA/SURROGATES/out 目录下, 当索引被删除时可重新利用.
The URL stub==URL根域名
like http://forum.yacy-websuche.de==比如链接 http://forum.yacy-websuche.de
this must be the path right in front of '\/viewtopic.php\?'==必须在'\/viewtopic.php\?'前面
Type==数据库
> of database<==> 类型<
use either 'mysql' or 'pgsql'==使用'mysql'或者'pgsql'
Host==数据库
> of the database<==> 主机名<
of database service==数据库服务
usually 3306 for mySQL==MySQL中通常是3306
Name of the database==数据库名
on the host==(位于给定主机上)
Table prefix string==表前缀
for table names==(用于表名)
User==数据库
that can access the database==用户名
Password==给定用户名的
for the account of that user given above==访问密码
Posts per file==导出备份中
in exported surrogates==每个文件拥有的最多帖子数
Check database connection==检查数据库连接
Export Content to Surrogates==导出到备份
Import a database dump==导入数据库
Import Dump==导入
Posts in database==数据库中帖子
first entry==第一个
last entry==最后一个
Info failed:==错误信息:
Export successful! Wrote \#\[files\]\# files in DATA/SURROGATES/in==导出成功! #[files]# 已写入到 DATA/SURROGATES/in 目录
Export failed:==导出失败:
Import successful!==导入成功!
Import failed:==导入失败:
#-----------------------------
#File: DictionaryLoader_p.html
#---------------------------
Dictionary Loader==功能扩展
YaCy can use external libraries to enable or enhance some functions. These libraries are not==您可以使用外部插件来增强一些功能. 考虑到程序大小问题,
included in the main release of YaCy because they would increase the application file too much.==这些插件并未被包含在主程序中.
You can download additional files here.==您可以在这下载扩展文件.
>Geolocalization<==>位置定位<
Geolocalization will enable YaCy to present locations from OpenStreetMap according to given search words.==根据关键字, YaCy能从OpenStreetMap获得的位置信息.
>GeoNames<==>位置<
With this file it is possible to find cities with a population > 1000 all over the world.==使用此文件能够找到全世界人口大于1000的城市.
>Download from<==>下载来源<
>Storage location<==>存储位置<
#>Status<==>Status<
>not loaded<==>未加载<
>loaded<==>已加载<
:deactivated==:已停用
>Action<==>动作<
>Result<==>结果<
"Load"=="加载"
"Deactivate"=="停用"
"Remove"=="卸载"
"Activate"=="启用"
>loaded and activated dictionary file<==>加载并启用插件<
>loading of dictionary file failed: \#\[error\]\#<==>读取插件失败: #[error]#<
>deactivated and removed dictionary file<==>停用并卸载插件<
>cannot remove dictionary file: \#\[error\]\#<==>卸载插件失败: #[error]#<
>deactivated dictionary file<==>停用插件<
>cannot deactivate dictionary file: \#\[error\]\#<==>停用插件失败: #[error]#<
>activated dictionary file<==>已启用插件<
>cannot activate dictionary file: \#\[error\]\#<==>启用插件失败: #[error]#<
#>OpenGeoDB<==>OpenGeoDB<
>With this file it is possible to find locations in Germany using the location \(city\) name, a zip code, a car sign or a telephone pre-dial number.<==>使用此插件, 则能通过查询城市名, 邮编, 车牌号或者电话区号得到德国任何地点的位置信息.<
#-----------------------------
#File: IndexCreateWWWGlobalQueue_p.html
#---------------------------
Global Crawl Queue==全球crawl队列
This queue stores the urls that shall be sent to other peers to perform a remote crawl.==此队列存储着需要发送到其他peer进行crawl的链接.
If there is no peer for remote crawling available, the links are crawled locally.==如果没有可用于远端crawl的peer, 这些链接会在本地crawl.
The global crawler queue is empty==全球crawl队列为空.
"clear global crawl queue"=="清空全球crawl队列"
There are <strong>\#\[num\]\#</strong> entries in the global crawler queue. Showing <strong>\#\[show-num\]\#</strong> most recent entries.==全球crawler队列中有 <strong>#[num]#</strong> 个条目. 显示最近的 <strong>#[show-num]#</strong> 个.
Show last==显示最近
</a> entries.==</a> 个.
Initiator==发起者
Profile==资料
Depth==深度
Modified Date==修改日期
Anchor Name==锚点名
#URL==URL
#-----------------------------
#File: IndexCreateWWWLocalQueue_p.html
#---------------------------
Local Crawl Queue==本地crawl队列
This queue stores the urls that shall be crawled localy by this peer.==此队列存储着此peer要在本地crawl的链接.
It may also contain urls that are computed by the proxy-prefetch.==此队列中也包含通过代理预取的链接.
The local crawler queue is empty==本地crawl队列为空.
There are <strong>\#\[num\]\#</strong> entries in the local crawler queue. Showing <strong>\#\[show-num\]\#</strong> most recent entries.==本地crawl队列中有 <strong>#[num]#</strong> 个条目. 显示最近的 <strong>#[show-num]#</strong> 个.
Show last==显示最近
</a> entries.==</a> 个.
Initiator==发起者
Profile==资料
Depth==深度
Modified Date==修改日期
Anchor Name==锚点名
URL==URL
\[Delete\]==[删除]
Delete Entries:==删除条目:
"Delete"=="删除"
This may take a quite long time.==这会花费很长一段时间.
#-----------------------------
#File: IndexCreateWWWRemoteQueue_p.html
#---------------------------
Remote Crawl Queue==远端Crawl队列
This queue stores the urls that other peers sent to you in order to perform a remote crawl for them.==此队列存储着其他peer发送给您从而为他们进行crawl的链接.
The remote crawler queue is empty==远端crawl队列为空
"clear remote crawl queue"=="清空远端crawl队列"
There are <strong>\#\[num\]\#</strong> entries in the remote crawler queue.==远端crawl队列中有 <strong>#[num]#</strong> 个条目.
Showing <strong>\#\[show-num\]\#</strong> most recent entries.==显示最近的 <strong>#[show-num]#</strong> 个.
Show last==显示最近
</a> entries.==</a> 个.
Initiator==发起者
Profile==资料
Depth==深度
Modified Date==修改日期
Anchor Name==锚点名
URL==URL
Delete==删除
#-----------------------------
#File: IndexImport_p.html
#---------------------------
YaCy \'\#\[clientname\]\#\': Index Import==YaCy '#[clientname]#': 索引导入
#Crawling Queue Import==Crawling Queue Import
Index DB Import==导入索引数据
The local index currently consists of \(at least\) \#\[wcount\]\# reverse word indexes and \#\[ucount\]\# URL references.==本地索引当前至少有 #[wcount]# 个关键字索引和 #[ucount]# 个URL关联.
Import Job with the same path already started.==含有相同路径的导入任务已存在.
Starting new Job==开始新任务
Import&nbsp;Type:==导入类型:
Cache Size==缓存大小
Usage Examples==使用<br />举例
"Path to the PLASMADB directory of the foreign peer"=="其他peer的PLASMADB目录路径"
Import&nbsp;Path:==导入路径:
"Start Import"=="开始导入"
Attention:==注意:
Always do a backup of your source and destination database before starting to use this import function.==在使用此导入功能之前, 一定要备份您的源数据库和目的数据库.
Currently running jobs==当前运行任务
Job Type==任务类型
>Path==>路径
Status==状态
Elapsed<br />Time==已用<br />时间
Time<br />Left==剩余<br />时间
Abort Import==停止
Pause Import==暂停
Finished::Running::Paused==已完成::正在运行::已暂停
"Abort"=="停止"
#"Pause"=="Pause"
"Continue"=="继续"
Finished jobs==已完成任务
"Clear List"=="清空列表"
Last Refresh:==最近刷新:
Example Path:==示例路径:
Requirements:==要求:
You need to have at least the following directories and files in this path:==此路经中至少包含以下目录和文件:
>Type==>类型
>Writeable==>可写
>Description==>描述
>File==>文件
>Directory==>目录
>Yes<==>是<
>No<==>否<
The LoadedURL Database containing all loaded and indexed URLs==已加载URL数据库中含有所有已加载并被索引的URL
The assortment directory containing parts of the word index.==分类目录中含有部分关键字索引.
The words directory containing parts of the word index.==关键字目录中含有部分关键字索引.
The assortment file that should be imported.==需要导入的分类文件.
The assortment file must have the postfix==分类文件一定要有后缀名
.db".==.db".
If you would like to import an assortment file from the <tt>PLASMADB\\ACLUSTER\\ABKP</tt>==如果您想从 <tt>PLASMADB\\ACLUSTER\\ABKP</tt> 中导入分类文件,
you have to rename it first.==则须先重命名.
>Notes:==>注意:
Please note that the imported words are useless if the destination peer doesn't know==如果目的peer不知道导入的关键字属于那些链接,
the URLs the imported words belongs to.==则导入的关键字无效.
Crawling Queue Import:==导入crawl队列:
Contains data about the crawljob an URL belongs to==含有crawl任务链接的数据
The crawling queue==crawl队列
Various stack files that belong to the crawling queue==属于crawl队列的各种栈文件
#-----------------------------
#File: IndexImportMediawiki_p.html
#---------------------------
#MediaWiki Dump Import==MediaWiki Dump Import
No import thread is running, you can start a new thread here==当前无运行导入任务, 不过您可以在这开始
Bad input data:==损坏数据:
MediaWiki Dump File Selection: select a \'bz2\' file==MediaWiki 备份文件: 选择一个 'bz2' 文件
You can import MediaWiki dumps here. An example is the file==您可以在这导入MediaWiki副本. 示例
Dumps must be in XML format and may be compressed in gz or bz2. Place the file in the YaCy folder or in one of its sub-folders.==备份文件必须是XML格式, 可使用gz或bz2压缩. 将其放进YaCy目录或其子目录中.
"Import MediaWiki Dump"=="导入MediaWiki备份"
When the import is started, the following happens:==开始导入时, 会进行以下工作:
The dump is extracted on the fly and wiki entries are translated into Dublin Core data format. The output looks like this:==备份文件会即时被解压, wiki条目会被转换为Dublin Core数据格式. 输出形如:
Each 10000 wiki records are combined in one output file which is written to /DATA/SURROGATES/in into a temporary file.==每个输出文件都含有10000个wiki记录, 并都被保存在 /DATA/SURROGATES/in 的临时目录中.
When each of the generated output file is finished, it is renamed to a .xml file==每个输出文件生成完毕后, 会被重命名为 .xml 文件
Each time a xml surrogate file appears in /DATA/SURROGATES/in, the YaCy indexer fetches the file and indexes the record entries.==只要 /DATA/SURROGATES/in 中含有 xml文件, YaCy索引器就会读取它们并为其中的条目制作索引.
When a surrogate file is finished with indexing, it is moved to /DATA/SURROGATES/out==当索引完成时, xml文件会被移动到 /DATA/SURROGATES/out
You can recycle processed surrogate files by moving them from /DATA/SURROGATES/out to /DATA/SURROGATES/in==您可以将文件从/DATA/SURROGATES/out 移动到 /DATA/SURROGATES/in 以重复索引.
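# Note: a minimal sketch of the recycling step described above, assuming a
# Unix-like shell and paths relative to the YaCy installation directory:
#   mv DATA/SURROGATES/out/*.xml DATA/SURROGATES/in/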
Import Process==导入进程
#Thread:==Thread:
#Dump:==Dump:
Processed:==已完成:
Wiki Entries==Wiki条目
Speed:==速度:
articles per second<==篇文章每秒<
Running Time:==运行时间:
hours,==小时,
minutes<==分<
Remaining Time:==剩余时间:
#hours,==hours,
#minutes<==minutes<
#-----------------------------
#File: IndexImportOAIPMH_p.html
#---------------------------
#OAI-PMH Import==OAI-PMH Import
Results from the import can be monitored in the <a href=\"CrawlResults.html\?process=7\">indexing results for surrogates==可在<a href="CrawlResults.html?process=7">备份索引结果中监视导入情况
Single request import==单个导入请求
This will submit only a single request as given here to a OAI-PMH server and imports records into the index==向OAI-PMH服务器提交如下导入请求, 并将返回记录导入索引
"Import OAI-PMH source"=="导入OAI-PMH源"
Source:==源:
Processed:==已处理:
records<==返回记录<
#ResumptionToken:==ResumptionToken:
Import failed:==导入失败:
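# Note: for orientation, a single OAI-PMH request as described above typically
# has this form (the endpoint URL is an assumed example; resumptionToken values
# are issued by the server):
#   http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
#   http://example.org/oai?verb=ListRecords&resumptionToken=...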
Import all Records from a server==从服务器导入全部记录
Import all records that follow according to resumption elements into index==根据恢复(resumption)元素将后续所有记录导入索引
"import this source"=="导入此源"
::or&nbsp;==::或&nbsp;
"import from a list"=="从列表导入"
Import started!==已开始导入!
Bad input data:==损坏数据:
#-----------------------------
#File: IndexImportOAIPMHList_p.html
#---------------------------
List of \#\[num\]\# OAI-PMH Servers==#[num]# 个OAI-PMH服务器
"Load Selected Sources"=="加载选中源"
OAI-PMH source import list==导入OAI-PMH源
#OAI Source List==OAI Source List
>Source<==>源<
Import List==导入列表
#>Thread<==>Thread<
#>Source<==>Source<
>Processed<br />Chunks<==>已处理<br />块<
>Imported<br />Records<==>已导入<br />记录<
>Speed<br />\(records/second\)==>速度<br />(记录/每秒)
#-----------------------------
#File: Load_MediawikiWiki.html
#---------------------------
YaCy \'\#\[clientname\]\#\': Configuration of a Wiki Search==YaCy '#[clientname]#': Wiki搜索配置
#Integration in MediaWiki==Integration in MediaWiki
It is possible to insert wiki pages into the YaCy index using a web crawl on that pages.==通过对wiki页面进行网页crawl, 能将其添加到YaCy索引中.
This guide helps you to crawl your wiki and to insert a search window in your wiki pages.==此向导帮助您crawl您的wiki网页, 在其中添加一个搜索框.
Retrieval of Wiki Pages==接收Wiki网页
The following form is a simplified crawl start that uses the proper values for a wiki crawl.==下栏是简化的crawl启动设置, 使用适合wiki crawl的参数值.
Just insert the front page URL of your wiki.==请填入Wiki的URL.
After you started the crawl you may want to get back==crawl开始后,
to this page to read the integration hints below.==您可能需要返回此页面阅读以下提示.
URL of the wiki main page==Wiki主页URL
This is a crawl start point==将作为crawl起始点
"Get content of Wiki: crawl wiki pages"=="获取Wiki内容: crawl Wiki页面"
Inserting a Search Window to MediaWiki==在MediaWiki中添加搜索框
To integrate a search window into a MediaWiki, you must insert some code into the wiki template.==在wiki模板中添加以下代码以将搜索框集成到MediaWiki中.
There are several templates that can be used for MediaWiki, but in this guide we consider that==MediaWiki中有多种模板,
you are using the default template, \'MonoBook.php\':==在此我们使用默认模板 'MonoBook.php':
open skins/MonoBook.php==打开 skins/MonoBook.php
find the line where the default search window is displayed, there are the following statements:==找到搜索框显示部分代码, 如下:
Remove that code or set it in comments using \'&lt;!--\' and \'--&gt;\'==删除以上代码或者用 '&lt;!--' '--&gt;' 将其注释掉
Insert the following code:==插入以下代码:
Search with YaCy in this Wiki:==在此Wiki中使用YaCy搜索:
value=\"Search\"==value="搜索"
Check all appearances of static IPs given in the code snippet and replace it with your own IP, or your host name==用您自己的IP或者主机名替代代码中给出的IP地址
You may want to change the default text elements in the code snippet==您可以更改代码中的文本元素
To see all options for the search widget, look at the more generic description of search widgets at==搜索框详细设置, 请参见
the <a href=\"ConfigLiveSearch.html\">configuration for live search</a>.==<a href=\"ConfigLiveSearch.html\">搜索栏集成: 即时搜索</a>.
#-----------------------------
#File: Load_PHPBB3.html
#---------------------------
Configuration of a phpBB3 Search==phpBB3搜索配置
#Integration in phpBB3==Integration in phpBB3
It is possible to insert forum pages into the YaCy index using a database import of forum postings.==通过导入论坛帖子数据库, 能将论坛页面添加到YaCy索引中.
This guide helps you to insert a search window in your phpBB3 pages.==此向导能帮助您在您的phpBB3论坛页面中添加搜索框.
Retrieval of phpBB3 Forum Pages using a database export==通过数据库导出获取phpBB3论坛页面
Forum posting contain rich information about the topic, the time, the subject and the author.==论坛帖子中含有话题, 时间, 主题和作者等丰富信息.
This information is in an bad annotated form in web pages delivered by the forum software.==在论坛软件生成的网页中, 这些信息的标注形式很差.
It is much better to retrieve the forum postings directly from the database.==所以, 直接从数据库中获取帖子内容效果更好.
This will cause that YaCy is able to offer nice navigation features after searches.==这使得YaCy能在搜索后提供良好的导航功能.
YaCy has a phpBB3 extraction feature, please go to the <a href="ContentIntegrationPHPBB3_p.html">phpBB3 content integration</a> servlet for direct database imports.==YaCy能够解析phpBB3关键字, 参见 <a href="ContentIntegrationPHPBB3_p.html">phpBB3内容集成</a> 直接导入数据库方法.
Retrieval of phpBB3 Forum Pages using a web crawl==使用网页crawl接收phpBB3论坛页面
The following form is a simplified crawl start that uses the proper values for a phpbb3 forum crawl.==下栏是简化的crawl启动设置, 使用适合phpBB3论坛crawl的参数值.
Just insert the front page URL of your forum. After you started the crawl you may want to get back==将论坛首页填入表格. 开始crawl后,
to this page to read the integration hints below.==您可能需要返回此页面阅读以下提示.
URL of the phpBB3 forum main page==phpBB3论坛主页
This is a crawl start point==这是crawl起始点
"Get content of phpBB3: crawl forum pages"=="获取phpBB3内容: crawl论坛页面"
Inserting a Search Window to phpBB3==在phpBB3中添加搜索框
To integrate a search window into phpBB3, you must insert some code into a forum template.==在论坛模板中添加以下代码以将搜索框集成到phpBB3中.
There are several templates that can be used for phpBB3, but in this guide we consider that==phpBB3中有多种模板,
you are using the default template, \'prosilver\'==在此我们使用默认模板 'prosilver'.
open styles/prosilver/template/overall_header.html==打开 styles/prosilver/template/overall_header.html
find the line where the default search window is displayed, thats right behind the <pre>\&lt;div id=\"search-box\"\&gt;</pre> statement==找到搜索框显示代码部分, 它们在 <pre>&lt;div id="search-box"&gt;</pre> 下面
Insert the following code right behind the div tag==在div标签后插入以下代码
YaCy Forum Search==YaCy论坛搜索
;YaCy Search==;YaCy搜索
Check all appearances of static IPs given in the code snippet and replace it with your own IP, or your host name==用您自己的IP或者主机名替代代码中给出的IP地址
You may want to change the default text elements in the code snippet==您可以更改代码中的文本元素
To see all options for the search widget, look at the more generic description of search widgets at==搜索框详细设置, 请参见
the <a href=\"ConfigLiveSearch.html\">configuration for live search</a>.==der Seite <a href=\"ConfigLiveSearch.html\">搜索栏集成: 即时搜索</a>.
#-----------------------------
#File: Load_RSS_p.html
#---------------------------
Configuration of a RSS Search==RSS搜索配置
Loading of RSS Feeds<==读取RSS Feed<
RSS feeds can be loaded into the YaCy search index.==YaCy能够读取RSS feed.
This does not load the rss file as such into the index but all the messages inside the RSS feeds as individual documents.==但不是直接读取RSS文件, 而是将RSS feed中的所有信息分别当作单独的文件来读取.
URL of the RSS feed==RSS feed链接
>Preview<==>预览<
"Show RSS Items"=="显示RSS条目"
Available after successful loading of rss feed in preview==在预览中成功读取RSS feed后可用
"Add All Items to Index \(full content of url\)"=="添加所有条目到索引中(URL中的全部内容)"
>once<==>立即<
>load this feed once now<==>立即读取此feed<
>scheduled<==>定时<
>repeat the feed loading every<==>读取此feed每隔<
>minutes<==>分钟<
>hours<==>小时<
>days<==>天<
> automatically.==>.
>List of Scheduled RSS Feed Load Targets<==>定时任务列表<
>Title<==>标题<
#>URL/Referrer<==>URL/Referrer<
>Recording<==>正在记录<
>Last Load<==>上次读取<
>Next Load<==>将要读取<
>Last Count<==>目前计数<
>All Count<==>全部计数<
>Avg. Update/Day<==>每天平均更新次数<
"Remove Selected Feeds from Scheduler"=="删除选中feed"
"Remove All Feeds from Scheduler"=="删除所有feed"
>Available RSS Feed List<==>可用RSS feed列表<
"Remove Selected Feeds from Feed List"=="删除选中feed"
"Remove All Feeds from Feed List"=="删除所有feed"
"Add Selected Feeds to Scheduler"=="添加选中feed到定时任务"
>new<==>新<
>enqueued<==>已加入队列<
>indexed<==>已索引<
>RSS Feed of==>RSS Feed
>Author<==>作者<
>Description<==>描述<
>Language<==>语言<
>Date<==>日期<
>Time-to-live<==>TTL<
>Docs<==>文件<
>State<==>状态<
#>URL<==>URL<
"Add Selected Items to Index \(full content of url\)"=="添加选中条目到索引(URL中全部内容)"
#-----------------------------
#File: Messages_p.html
#---------------------------
>Messages==>短消息
Date</td>==日期</td>
From</td>==来自</td>
To</td>==发送至</td>
>Subject==>主题
Action==动作
From:==来自:
To:==发送至:
Date:==日期:
#Subject:==Betreff:
>view==>查看
reply==回复
>delete==>删除
Compose Message==撰写短消息
Send message to peer==发送消息至peer
"Compose"=="撰写"
Message:==短消息:
inbox==收件箱
#-----------------------------
#File: MessageSend_p.html
#---------------------------
Send message==发送短消息
You cannot send a message to==不能发送消息至
The peer does not respond. It was now removed from the peer-list.==远端peer未响应, 已从peer列表中删除.
The peer <b>==peer <b>
is alive and responded:==可用:
You are allowed to send me a message==您现在可以给我发送消息
kb and an==kb和一个
attachment &le;==附件 &le;
Your Message==您的短消息
Subject:==主题:
Text:==内容:
"Enter"=="发送"
"Preview"=="预览"
You can use==您可以在这使用
Wiki Code</a> here.==Wiki Code </a>.
Preview message==预览消息
The message has not been sent yet!==短消息未发送!
The peer is alive but cannot respond. Sorry.==peer属于活动状态但是无响应.
Your message has been sent. The target peer responded:==您的短消息已发送. 接收peer返回:
The target peer is alive but did not receive your message. Sorry.==抱歉, 接收peer属于活动状态但是没有接收到您的消息.
Here is a copy of your message, so you can copy it to save it for further attempts:==这是您的消息副本, 可被保存已备用:
You cannot call this page directly. Instead, use a link on the <a href="Network.html">Network</a> page.==您不能直接使用此页面. 请使用 <a href="Network.html">网络</a> 页面的对应功能.
#-----------------------------
#File: Network.html
#---------------------------
YaCy Search Network==YaCy搜索网络
YaCy Network<==YaCy网络<
The information that is presented on this page can also be retrieved as XML==此页信息也可表示为XML
Click the API icon to see the XML.==点击API图标查看XML.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==获取所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
Network Overview==网络一览
Active&nbsp;Peers==活动peer
Passive&nbsp;Peers==被动peer
Potential&nbsp;Peers==潜在peer
Active Peers in \'\#\[networkName\]\#\' Network=='#[networkName]#'网络中的活动peer
Passive Peers in \'\#\[networkName\]\#\' Network=='#[networkName]#'网络中的被动peer
Potential Peers in \'\#\[networkName\]\#\' Network=='#[networkName]#'网络中的潜在peer
Manually contacting Peer==手动联系peer
no remote \#\[peertype\]\# peer for this list known==当前列表中无远端 #[peertype]# peer.
Showing \#\[num\]\# entries from a total of \#\[total\]\# peers.==显示全部 #[total]# 个peer中的 #[num]# 个.
send&nbsp;<strong>M</strong>essage/<br/>show&nbsp;<strong>P</strong>rofile/<br/>edit&nbsp;<strong>W</strong>iki/<br/>browse&nbsp;<strong>B</strong>log==发送消息(<strong>m</strong>)/<br/>显示资料(<strong>p</strong>)/<br/>编辑wiki(<strong>w</strong>)/<br/>浏览博客(<strong>b</strong>)
Search for a peername \(RegExp allowed\)==搜索peer名称(允许正则表达式)
"Search"=="搜索"
Name==名称
Address==地址
Hash==Hash
Type==类型
Release/<br/>SVN==YaCy版本/<br/>SVN
Last<br/>Seen==最后<br/>上线
Location==位置
>URLs for<br/>Remote<br/>Crawl<==>用于<br/>远端<br/>crawl的URL<
Offset==偏移
Send message to peer==发送消息至peer
View profile of peer==查看peer资料
Read and edit wiki on peer==查看并编辑wiki
Browse blog of peer==查看博客
#"Ranking Receive: no"=="接收排名: 否"
#"no ranking receive"=="无接收排名"
#"Ranking Receive: yes"=="接收排名: 是"
#"Ranking receive enabled"=="打开排名接收"
"DHT Receive: yes"=="接收DHT: 是"
"DHT receive enabled"=="打开DHT接收"
"DHT Receive: no; \#\[peertags\]\#"=="接收DHT: 否; #[peertags]#"
"DHT Receive: no"=="接收DHT: 否"
#no tags given==no tags given
"no DHT receive"=="无接收DHT"
"Accept Crawl: no"=="接受crawl: 否"
"no crawl"=="无crawl"
"Accept Crawl: yes"=="接受crawl: 是"
"crawl possible"=="可用crawl"
Contact: passive==通信: 被动
Contact: direct==通信: 直接
Seed download: possible==Seed下载: 可用
runtime:==运行时间:
#Peers==Peers
#YaCy Cluster==YaCy Cluster
>Network<==>网络<
#>Online Peers<==>Online Peers<
>Number of<br/>Documents<==>文件<br/>数目<
Indexing Speed:==索引速度:
Pages Per Minute \(PPM\)==页面每分钟(PPM)
Query Frequency:==请求频率:
Queries Per Hour \(QPH\)==请求每小时(QPH)
>Today<==>今天<
>Last&nbsp;Week<==>最近&nbsp;一周<
>Last&nbsp;Month<==>最近&nbsp;一月<
>Now<==>现在<
>Active<==>活动<
>Passive<==>被动<
>Potential<==>潜在<
>This Peer<==>本机peer<
URLs for<br/>Remote Crawl==用于远端<br/>crawl的URL
"The YaCy Network"=="YaCy网络"
Indexing<br/>PPM==索引<br/>PPM
\(public&nbsp;local\)==(公共&nbsp;本地)
\(remote\)==(远程)
Your Peer:==您的peer:
#>Name<==>Name<
#>Info<==>Info<
#>Version<==>Version<
#>UTC<==>UTC<
>Uptime<==>开机时间<
#>Links<==>Links<
#>RWIs<==>RWIs<
Sent<br/>Words==已发送<br/>关键字
Sent<br/>URLs==已发送<br/>URL
Received<br/>Words==已接收<br/>关键字
Received<br/>URLs==已接收<br/>URL
Known<br/>Seeds==已知<br/>Seeds
Connects<br/>per hour==联系<br/>每小时
#Version==Version
#Own/Other==Own/Other
>dark green font<==>深绿色字<
senior/principal peers==高级/主要 peer
>light green font<==>浅绿色字<
>passive peers<==>被动peer<
>pink font<==>粉色字<
junior peers==次级peer
red point==红点
this peer==本机peer
>grey waves<==>灰色波浪<
>crawling activity<==>crawl活动<
>green radiation<==>绿色辐射圆<
>strong query activity<==>强烈请求活动<
>red lines<==>红线<
>DHT-out<==>DHT输出<
>green lines<==>绿线<
>DHT-in<==>DHT输入<
#DHT-out==DHT-out
#You are in online mode, but probably no internet resource is available.==You are in online mode, but probably no internet resource is available.
#Please check your internet connection.==Please check your internet connection.
#You are not in online mode. To get online, press this button:==You are not in online mode. To get online, press this button:
#"go online"=="go online"
#-----------------------------
#File: News.html
#---------------------------
Overview==概述
Incoming&nbsp;News==已接收
Processed&nbsp;News==已处理
Outgoing&nbsp;News==已生成
Published&nbsp;News==已发布
This is the YaCyNews system \(currently under testing\).==这是YaCy新闻系统(测试中).
The news service is controlled by several entry points:==新闻服务由以下动作控制:
A crawl start with activated remote indexing will automatically create a news entry.==开启了远程索引的crawl启动会自动创建一个新闻条目.
Other peers may use this information to prevent double-crawls from the same start point.==其他的peer能利用此信息以防止相同起始点的二次crawl.
A table with recently started crawls is presented on the Index Create - page=="创建索引"页面会显示最近启动的crawl.
A change in the personal profile will create a news entry. You can see recently made changes of==个人信息的改变会创建一个新闻条目, 可以在网络页面查看,
profile entries on the Network page, where that profile change is visualized with a '\*' beside the 'P' \(profile\) - selector.==以带有 '*' 的 'P' (资料)标记出.
More news services will follow.==接下来会有更多的新闻服务.
Above you can see four menues:==上面四个菜单选项分别为:
<strong>Incoming News \(\#\[insize\]\#\)</strong>: latest news that arrived your peer.==<strong>已接收新闻(#[insize]#)</strong>: 发送至您peer的新闻.
Only these news will be used to display specific news services as explained above.==只有这些新闻会用于显示上述特定新闻服务.
You can process these news with a button on the page to remove their appearance from the IndexCreate and Network page==您可以使用页面上的按钮处理这些新闻, 使其不再出现在"创建索引"和"网络"页面
<strong>Processed News \(\#\[prsize\]\#\)</strong>: this is simply an archive of incoming news that you removed by processing.==<strong>已处理新闻(#[prsize]#)</strong>: 此页面显示您已删除的新闻.
<strong>Outgoing News \(\#\[ousize\]\#\)</strong>: here your can see news entries that you have created. These news are currently broadcasted to other peers.==<strong>已生成新闻(#[ousize]#)</strong>: 此页面显示您的peer创建的新闻条目, 默认发布给其他peer.
you can stop the broadcast if you want.==您也可以选择停止发布.
<strong>Published News \(\#\[pusize\]\#\)</strong>: your news that have been broadcasted sufficiently or that you have removed from the broadcast list.==<strong>已发布新闻(#[pusize]#)</strong>: 显示已经完全发布出去的新闻或者已经从发布列表删除的新闻.
Originator==创建者
Created==创建时间
Category==分类
Received==接收时间
Distributed==已发布
Attributes==属性
"\#\(page\)\#::Process Selected News::Delete Selected News::Abort Publication of Selected News::Delete Selected News\#\(/page\)\#"=="#(page)#::处理选中新闻::删除选中新闻::停止发布选中新闻::删除选中新闻#(/page)#"
"\#\(page\)\#::Process All News::Delete All News::Abort Publication of All News::Delete All News\#\(/page\)\#"=="#(page)#::处理所有新闻::删除所有新闻::停止发布所有新闻::删除所有新闻#(/page)#"
#-----------------------------
#File: Performance_p.html
#---------------------------
Performance Settings==性能设置
Memory Settings==内存设置
Memory reserved for JVM==为JVM保留的内存
"Set"=="设置"
Resource Observer==资源查看
DHT-Trigger==DHT-触发
not triggered:==未触发:
>triggered==>已触发
reset state==重置
HDD==硬盘
disable crawls below==停止crawl当低于
free space==空闲空间
disable DHT-in below==停止接收DHT当低于
RAM==内存
Accepted change. This will take effect after <strong>restart</strong> of YaCy==已接受改变. 在YaCy<strong>重启</strong>后生效
restart now</a>==立即重启</a>
Confirm Restart==确定重启
refresh graph==刷新图表
#show memory tables==show memory tables
Use Default Profile:==使用默认配置:
and use==并使用
of the defined performance.==中的默认性能设置.
Save==保存
Changes take effect immediately==改变立即生效
YaCy Priority Settings==YaCy优先级设置
YaCy Process Priority==YaCy进程优先级
#Normal==Normal
Below normal==低于普通
Idle</option>==空闲</option>
"Set new Priority"=="置为新优先级"
Changes take effect after <strong>restart</strong> of YaCy==在YaCy<strong>重启</strong>后生效.
Online Caution Settings==在线警告设置
This is the time that the crawler idles when the proxy is accessed, or a local or remote search is done.==这是代理被访问或者搜索完成后的一段crawl空闲时间.
The delay is extended by this time each time the proxy is accessed afterwards.==在访问代理后, 会触发此延时,
This shall improve performance of the affected process \(proxy or search\).==从而提高相关进程(代理或者搜索)的性能.
\(current delta is==(距上次代理/本地搜索/远端搜索访问已过
seconds since last proxy/local-search/remote-search access.\)==秒.)
Online Caution Case==触发事件
indexer delay \(milliseconds\) after case occurency==事件触发后的索引延时(毫秒)
#Proxy:==Proxy:
Local Search:==本地搜索:
Remote Search:==远端搜索:
"Enter New Parameters"=="使用新参数"
#-----------------------------
#File: PerformanceMemory_p.html
#---------------------------
Performance Settings for Memory==内存性能设置
refresh graph==刷新图表
Memory Usage:==内存使用:
After Startup==启动后
After Initializations==初始化后
before GC==GC前
after GC==GC后
>Now==>现在
before <==前 <
Description==描述
maximum memory that the JVM will attempt to use==JVM使用的最大内存
>Available<==>可用<
total available memory including free for the JVM within maximum==当前JVM可用剩余内存
>Total<==>全部<
total memory taken from the OS==操作系统分配内存
>Free<==>空闲<
free memory in the JVM within total amount==JVM空闲内存
>Used<==>已用<
used memory in the JVM within total amount==JVM已用内存
#EcoTable RAM Index:==EcoTable RAM Index:
Table RAM Index:==Table使用内存:
>Size==>大小
>Key==>关键字
>Value==>值
#FlexTable RAM Index:==FlexTable RAM Index:
Table</td>==Table</td>
Chunk Size<==块大小<
#Count</td>==Count</td>
Used Memory<==已用内存<
#Node Caches:==Node Caches:
Object Index Caches:==Object索引缓存:
Needed Memory==所需内存大小
Object Read Caches==Object读缓存
>Read Hit Cache<==>命中缓存<
>Read Miss Cache<==>丢失缓存<
>Read Hit<==>读命中<
>Read Miss<==>读丢失<
Write Unique<==唯一写入<
Write Double<==重复写入<
Deletes<==删除<
Flushes<==清理<
Total Mem==全部内存
MB \(hit\)==MB (命中)
MB \(miss\)==MB (丢失)
Stop Grow when less than \#\[objectCacheStopGrow\]\# MB available left==可用内存低于 #[objectCacheStopGrow]# MB时停止增长
Start Shrink when less than \#\[objectCacheStartShrink\]\# MB availabe left==可用内存低于 #[objectCacheStartShrink]# MB开始减少
Other Caching Structures:==其他缓存结构:
Type</td>==类型</td>
>Hit<==>命中<
>Miss<==>丢失<
Insert<==插入<
Delete<==删除<
#DNSCache</td>==DNSCache</td>
#DNSNoCache</td>==DNSNoCache</td>
#HashBlacklistedCache==HashBlacklistedCache
Search Event Cache<==搜索事件缓存<
#-----------------------------
#File: PerformanceQueues_p.html
#---------------------------
Performance Settings of Queues and Processes==队列和进程性能设置
Scheduled tasks overview and waiting time settings:==定时任务一览与等待时间设置:
Queue Size==队列<br />大小
>Total==>全部
#Block Time==
#Sleep Time==
#Exec Time==
<td>Idle==<td>空闲
>Busy==>忙碌
Short Mem<br />Cycles==小内存<br />周期
>per Cycle==>每周期
>per Busy-Cycle==>每次忙碌周期
>Memory Use==>内存<br />使用
>Delay between==>延时
>idle loops==>空闲循环
>busy loops==>忙碌循环
Minimum of<br />Required Memory==最小<br />需要内存
Full Description==完整描述
Submit New Delay Values==提交新延时值
Changes take effect immediately==改变立即生效
Cache Settings:==缓存设置:
#RAM Cache==RAM Cache
<td>Description==<td>描述
URLs in RAM buffer:==缓存中URL:
This is the size of the URL write buffer. Its purpose is to buffer incoming URLs==这是URL写缓冲的大小.作用是缓冲接收URL,
in case of search result transmission and during DHT transfer.==以利于结果转移和DHT传递.
Words in RAM cache:==缓存中关键字:
This is the current size of the word caches.==这是当前关键字缓存的大小.
The indexing cache speeds up the indexing process, the DHT cache holds indexes temporary for approval.==此缓存能加速索引进程, 也能用于DHT.
The maximum of this caches can be set below.==此缓存最大值能从下面设置.
Maximum URLs currently assigned<br />to one cached word:==关键字拥有最大URL数:
This is the maximum size of URLs assigned to a single word cache entry.==这是单个关键字缓存条目所能分配的最多URL数目.
If this is a big number, it shows that the caching works efficiently.==如果此数值较大, 则表示缓存效率很高.
Maximum age of a word:==关键字最长寿命:
This is the maximum age of a word in an index in minutes.==这是索引内关键字所能存在的最长时间.
Minimum age of a word:==关键字最短寿命:
This is the minimum age of a word in an index in minutes.==这是索引内关键字所能存在的最短时间.
Maximum number of words in cache:==缓存中关键字最大数目:
This is is the number of word indexes that shall be held in the==这是索引时缓存中存在的最大关键字索引数目.
ram cache during indexing. When YaCy is shut down, this cache must be==当YaCy停止时,
flushed to disc; this may last some minutes.==它们会被冲刷到硬盘中, 可能会花费数分钟.
#Initial space of words in cache:==Initial space of words in cache:
#This is is the init size of space for words in cache.==This is is the init size of space for words in cache.
Enter New Cache Size==使用新缓存大小
Balancer Settings==平衡器设置
This is the time delta between accessing of the same domain during a crawl.==这是在crawl期间, 访问同一域名的间歇值.
The crawl balancer tries to avoid that domains are==crawl平衡器能够避免频繁地访问同一域名,
accessed too often, but if the balancer fails \(i.e. if there are only links left from the same domain\), then these minimum==如果平衡器失效(比如只剩下来自同一域名的链接), 则由这些最小
delta times are ensured.==间歇值提供保障.
#>Crawler Domain<==>Crawler Domain<
>Minimum Access Time Delta<==>最小访问间歇<
>local \(intranet\) crawls<==>本地(局域网)crawl<
>global \(internet\) crawls<==>全球(广域网)crawl<
"Enter New Parameters"=="使用新参数"
Thread Pool Settings:==线程池设置:
maximum Active==最大活动
current Active==当前活动
Enter new Threadpool Configuration==使用新配置
#-----------------------------
#File: PerformanceConcurrency_p.html
#---------------------------
Performance of Concurrent Processes==并行进程性能查看
serverProcessor Objects==处理器对象
#Thread==Thread
Queue Size<br />Current==当前队列<br />大小
Concurrency:<br />Number of Threads==并行:<br />线程数
Childs==子进程
Average<br />Block Time<br />Reading==平均<br />读取阻塞<br />时间
Average<br />Exec Time==平均运行时间
Average<br />Block Time<br />Writing==平均<br />写阻塞<br />时间
Total<br />Cycles==运行次数
Full Description==完整描述
#-----------------------------
#File: PerformanceSearch_p.html
#---------------------------
Performance Settings of Search Sequence==搜索序列性能设置
Search Sequence Timing==搜索序列时间测量
Timing results of latest search request:==最近一次搜索请求时间测量结果:
Query==请求
Event<==事件<
Comment<==注释<
Time<==时间<
Duration \(ms\)==耗时(毫秒)
Result-Count==结果数目
The network picture below shows how the latest search query was solved by asking corresponding peers in the DHT:==下图显示了通过询问DHT中peer解析的最近搜索请求情况:
red -> request list alive==红色 -> 活动请求列表
green -> request has terminated==绿色 -> 已终结请求列表
grey -> the search target hash order position\(s\) \(more targets if a dht partition is used\)<==灰色 -> 搜索目标hash序列位置(如果使用dht会产生更多目标)<
"Search event picture"=="搜索时间图况"
#-----------------------------
#File: ProxyIndexingMonitor_p.html
#---------------------------
Indexing with Proxy==代理索引
YaCy can be used to 'scrape' content from pages that pass the integrated caching HTTP proxy.==YaCy能够从经过集成缓存HTTP代理的页面中'抓取'内容.
When scraping proxy pages then <strong>no personal or protected page is indexed</strong>;==当通过代理进行搜索时不会索引<strong>私有或者受保护页面</strong>;
# This is the control page for web pages that your peer has indexed during the current application run-time==This is the control page for web pages that your peer has indexed during the current application run-time
# as result of proxy fetch/prefetch.==as result of proxy fetch/prefetch.
# No personal or protected page is indexed==No personal or protected page is indexed
those pages are detected by properties in the HTTP header \(like Cookie-Use, or HTTP Authorization\)==通过检测HTTP头部属性(比如cookie用途或者http认证)
or by POST-Parameters \(either in URL or as HTTP protocol\)==或者提交参数(链接或者http协议)
and automatically excluded from indexing.==能够检测出此类网页并在索引时排除.
Proxy Auto Config:==自动配置代理:
this controls the proxy auto configuration script for browsers at http://localhost:8090/autoconfig.pac==这会影响浏览器代理自动配置脚本 http://localhost:8090/autoconfig.pac
.yacy-domains only==仅 .yacy 域名
whether the proxy should only be used for .yacy-Domains==代理是否只对 .yacy 域名有效.
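# Note: a proxy auto-config script is a small JavaScript file defining the
# function FindProxyForURL. A minimal sketch of what the script served at
# /autoconfig.pac may look like with the ".yacy-domains only" option enabled
# (port 8090 assumed as the YaCy default):
#   function FindProxyForURL(url, host) {
#     if (shExpMatch(host, "*.yacy")) return "PROXY localhost:8090";
#     return "DIRECT";
#   }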
Proxy pre-fetch setting:==代理预读设置:
this is an automated html page loading procedure that takes actual proxy-requested==这是一个自动预读网页的过程
URLs as crawling start points for crawling.==期间会将请求代理的URL作为crawl起始点.
Prefetch Depth==预读深度
A prefetch of 0 means no prefetch; a prefetch of 1 means to prefetch all==设置为0则不预读; 设置为1预读所有嵌入链接,
embedded URLs, but since embedded image links are loaded by the browser==但是嵌入图像链接是由浏览器读取,
this means that only embedded href-anchors are prefetched additionally.==这意味着只会额外预读嵌入的href锚点链接.
Store to Cache==存储至缓存
It is almost always recommended to set this on. The only exception is that you have another caching proxy running as secondary proxy and YaCy is configured to used that proxy in proxy-proxy - mode.==推荐打开此项设置. 唯一的例外是您有另一个缓存代理作为二级代理并且YaCy设置为使用'代理到代理'模式.
Do Local Text-Indexing==进行本地文本索引
If this is on, all pages \(except private content\) that passes the proxy is indexed.==如果打开此项设置, 所有通过代理的网页(除了私有内容)都会被索引.
Do Local Media-Indexing==进行本地媒体索引
This is the same as for Local Text-Indexing, but switches only the indexing of media content on.==与本地文本索引类似, 但是仅当'索引媒体内容'打开时有效.
Do Remote Indexing==进行远程索引
If checked, the crawler will contact other peers and use them as remote indexers for your crawl.==如果被选中, crawler会联系其他peer并将之作为远程索引器.
If you need your crawling results locally, you should switch this off.==如果仅需要本地索引结果, 可以关闭此项.
Only senior and principal peers can initiate or receive remote crawls.==只有高级peer和主要peer能初始化和接收远端crawl.
Please note that this setting only take effect for a prefetch depth greater than 0.==请注意, 此设置仅在预读深度大于0时有效.
Proxy generally==代理杂项设置
Path==路径
The path where the pages are stored \(max. length 300\)==存储页面的路径(最大300个字符长度)
Size</label>==大小</label>
The size in MB of the cache.==缓存大小(MB).
"Set proxy profile"=="保存设置"
The file DATA/PLASMADB/crawlProfiles0.db is missing or corrupted.==文件 DATA/PLASMADB/crawlProfiles0.db 丢失或者损坏.
Please delete that file and restart.==请删除此文件并重启.
Pre-fetch is now set to depth==预读深度现为
Caching is now \#\(caching\)\#off\:\:on\#\(/caching\)\#.==缓存现已 #(caching)#关闭::打开#(/caching)#.
Local Text Indexing is now \#\(indexingLocalText\)\#off::on==本地文本索引现已 #(indexingLocalText)#关闭::打开
Local Media Indexing is now \#\(indexingLocalMedia\)\#off::on==本地媒体索引现已 #(indexingLocalMedia)#关闭::打开
Remote Indexing is now \#\(indexingRemote\)\#off::on==远程索引现已 #(indexingRemote)#关闭::打开
Cachepath is now set to \'\#\[return\]\#\'.</strong> Please move the old data in the new directory.==缓存路径现为 '#[return]#'.</strong> 请将旧文件移至此目录.
Cachesize is now set to \#\[return\]\#MB.==缓存大小现为 #[return]#MB.
Changes will take effect after restart only.==改变仅在重启后生效.
An error has occurred:==发生错误:
You can see a snapshot of recently indexed pages==你可以在
on the==
Page.==页面查看最近索引页面快照.
#-----------------------------
#File: QuickCrawlLink_p.html
#---------------------------
Quick Crawl Link==快速crawl链接
Quickly adding Bookmarks:==快速添加书签:
Simply drag and drop the link shown below to your Browsers Toolbar/Link-Bar.==仅需拖动以下链接至浏览器工具栏/书签栏.
If you click on it while browsing, the currently viewed website will be inserted into the YaCy crawling queue for indexing.==如果在浏览网页时点击它, 当前查看的页面会被插入到YaCy crawl队列中以用于索引.
Crawl with YaCy==用YaCy进行crawl
Title:==标题:
Link:==链接:
Status:==状态:
URL successfully added to Crawler Queue==已成功添加链接到crawl队列.
Malformed URL==异常链接
Unable to create new crawling profile for URL:==创建链接crawl信息失败:
Unable to add URL to crawler queue:==添加链接到crawl队列失败:
#-----------------------------
#File: Ranking_p.html
#---------------------------
Ranking Configuration==排名配置
The document ranking influences the order of the search result entities.==排名影响到搜索结果的排列顺序.
A ranking is computed using a number of attributes from the documents that match with the search word.==通过计算所有符合搜索关键字的文件属性, 从而得到排名.
The attributes are first normalized over all search results and then the normalized attribut is multiplied with the ranking coefficient computed from this list.==这些属性首先会在所有搜索结果上归一化, 然后归一化后的属性会与根据此列表计算出的排名系数相乘.
The ranking coefficient grows exponentially with the ranking levels given in the following table.==排名系数根据下表排名级别呈指数增长.
If you increase a single value by one, then the strength of the parameter doubles.==如果值加1, 则参数影响强度加倍.
Pre-Ranking==预排名
# Currently the values and the hover-over information are hardcoded in Ranking_p.java and cannot be translated
#
#Date==Date
#a higher ranking level prefers younger documents.==a higher ranking level prefers younger documents.
#The age of a document is measured using the date submitted by the remote server as document date==The age of a document is measured using the date submitted by the remote server as document date
There are two ranking stages:==有两个排名阶段:
first all results are ranked using the pre-ranking and from the resulting list the documents are ranked again with a post-ranking.==首先对搜索结果进行一次排名, 然后再对首次排名结果进行二次排名.
The two stages are separated because they need statistical information from the result of the pre-ranking.==两个阶段是分开的, 因为二次排名需要来自预排名结果的统计信息.
Post-Ranking==二次排名
#Application Of Prefer Pattern==Application Of Prefer Pattern
#a higher ranking level prefers documents where the url matches the prefer pattern given in a search request.==a higher ranking level prefers documents where the url matches the prefer pattern given in a search request.
"Set as Default Ranking"=="保存为默认排名"
"Re-Set to Built-In Ranking"=="重置排名设置"
#-----------------------------
#File: RemoteCrawl_p.html
#---------------------------
Remote Crawl Configuration==远端crawl配置
#>Remote Crawler<==>Remote Crawler<
The remote crawler is a process that requests urls from other peers.==远端crawler是一个向其他peer请求URL的进程.
Peers offer remote-crawl urls if the flag \'Do Remote Indexing\'==如果选中了'进行远程索引', 则peer在开始crawl时
is switched on when a crawl is started.==能够进行远端crawl.
Remote Crawler Configuration==远端crawler配置
>Accept Remote Crawl Requests<==>接受远端crawl请求<
Perform web indexing upon request of another peer.==收到另一peer请求时进行网页索引.
Load with a maximum of==最多每分钟读取
pages per minute==个页面
"Save"=="保存"
Crawl results will appear in the==crawl结果会出现在
>Crawl Result Monitor<==>crawl结果监视<
Peers offering remote crawl URLs==提供远端crawl的peer
If the remote crawl option is switched on, then this peer will load URLs from the following remote peers:==如果选中了远端crawl选项, 则本机peer会从以下远端peer读取链接:
#>Name<==>Name<
#>Remote<br/>Crawl<==>Remote<br/>Crawl<
#>Release/<br/>SVN<==>Version/<br/>SVN<
>PPM<==>页面每分钟(PPM)<
>QPH<==>请求每小时(QPH)<
>Last<br/>Seen<==>上次<br/>出现<
>UTC</strong><br/>Offset<==>UTC</strong><br/>时区<
#>Uptime<==>Uptime<
#>Links<==>Links<
#>RWIs<==>RWIs<
>Age<==>寿命<
#-----------------------------
#File: Settings_p.html
#---------------------------
Advanced Settings==高级设置
If you want to restore all settings to the default values,==如果要恢复所有默认设置,
but <strong>forgot your administration password</strong>, you must stop the proxy,==但是忘记了<strong>管理员密码</strong>, 则您必须首先停止代理,
delete the file 'DATA/SETTINGS/yacy.conf' in the YaCy application root folder and start YaCy again.==删除YaCy根目录下的 'DATA/SETTINGS/yacy.conf' 并重启.
#Performance Settings of Queues and Processes==Performance Settings of Queues and Processes
Performance Settings of Busy Queues==忙碌队列性能设置
Performance of Concurrent Processes==并行进程性能
Performance Settings for Memory==内存性能设置
Performance Settings of Search Sequence==搜索序列性能设置
### --- Those 3 items are removed in latest SVN BEGIN
Viewer and administration for database tables==查看与管理数据库表格
Viewer for Peer-News==查看peer新闻
Viewer for Cookies in Proxy==查看代理cookie
### --- Those 3 items are removed in latest SVN END
Server Access Settings==服务器访问设置
Proxy Access Settings==代理访问设置
#Content Parser Settings==Content Parser Settings
Crawler Settings==Crawler设置
HTTP Networking==HTTP网络
#Remote Proxy \(optional\)==Remote Proxy (optional)
Seed Upload Settings==Seed上传设置
Message Forwarding \(optional\)==消息发送(可选)
#-----------------------------
#File: Settings_Crawler.inc
#---------------------------
Generic Crawler Settings==普通crawler设置
Connection timeout in ms==连接超时(ms)
means unlimited==表示无限制
HTTP Crawler Settings:==HTTP crawler设置:
Maximum Filesize==文件最大尺寸
FTP Crawler Settings==FTP crawler设置
SMB Crawler Settings==SMB crawler设置
Local File Crawler Settings==本地文件crawler设置
Maximum allowed file size in bytes that should be downloaded==允许下载的最大文件尺寸(字节)
Larger files will be skipped==超出此限制的文件将被忽略
Please note that if the crawler uses content compression, this limit is used to check the compressed content size==请注意, 如果crawler使用内容压缩, 则此限制对压缩后文件大小有效.
Submit==提交
Changes will take effect immediately==改变立即生效
#-----------------------------
#File: Settings_Http.inc
#---------------------------
HTTP Networking==HTTP网络
Transparent Proxy==透明代理
With this you can specify if YaCy can be used as transparent proxy.==选此指定YaCy作为透明代理.
Hint: On linux you can configure your firewall to transparently redirect all http traffic through yacy using this iptables rule==提示: 在Linux系统中, 您可以配置防火墙, 使用如下iptables规则将所有http流量透明地重定向至yacy
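# Note: the exact rule is shown in the servlet itself; as a minimal sketch,
# assuming LAN clients arrive on eth0 and YaCy listens on its default port 8090,
# such a transparent-redirect rule could look like:
#   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8090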
Connection Keep-Alive==保持连接
With this you can specify if YaCy should support the HTTP connection keep-alive feature.==选此指定YaCy支持HTTP连接保持特性.
Send "Via" Header==发送"Via"头
Specifies if the proxy should send the <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.45">Via</a>==选此指定代理是否发送<a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.45">"Via"HTTP头</a>
http header according to RFC 2616 Sect 14.45.==根据RFC 2616 Sect14.45.
Send "X-Forwarded-For" Header== 发送"X-Forward-For"头
Specifies if the proxy should send the X-Forwarded-For http header.==指定代理是否发送"X-Forward-For"头.
"Submit"=="提交"
Changes will take effect immediately.==改变立即生效.
#-----------------------------
#File: Settings_Proxy.inc
#---------------------------
YaCy can use another proxy to connect to the internet. You can enter the address for the remote proxy here:==YaCy能够通过第二代理连接到网络, 在此输入远程代理地址.
Use remote proxy</label>==使用远程代理</label>
Enables the usage of the remote proxy by yacy==打开以支持远程代理
Use remote proxy for yacy &lt;-&gt; yacy communication==为YaCy &lt;-&gt; YaCy 通信使用代理
Specifies if the remote proxy should be used for the communication of this peer to other yacy peers.==选此指定远程代理是否支持YaCy peer间通信.
<em>Hint:</em> Enabling this option could cause this peer to remain in junior status.==<em>提示:</em> 打开此选项后本地peer会被置为次级peer.
Use remote proxy for HTTPS==为HTTPS使用远程代理
Specifies if YaCy should forward ssl connections to the remote proxy.==选此指定YaCy是否使用SSL代理.
Remote proxy host==远程代理主机
The ip address or domain name of the remote proxy==远程代理的IP地址或者域名
Remote proxy port==远程代理端口
the port of the remote proxy==远程代理使用的端口
Remote proxy user==远程代理用户
Remote proxy password==远程代理用户密码
No-proxy adresses==无代理地址
IP addresses for which the remote proxy should not be used==指定不使用代理的IP地址
"Submit"=="提交"
Changes will take effect immediately.==改变立即生效.
#-----------------------------
#File: Settings_ProxyAccess.inc
#---------------------------
Proxy Access Settings==代理访问设置
These settings configure the access method to your own http proxy and server.==设定http代理和服务器的访问方式.
All traffic is routed throug one single port, for both proxy and server.==代理和服务器流量均从同一端口流过.
Server/Proxy Port Configuration==服务器/代理 端口设置
The socket addresses where YaCy should listen for incoming connections from other YaCy peers or http clients.==指定YaCy需要监听的socket地址.
You have four possibilities to specify the address:==有以下四种指定地址的方式:
defining a port only==仅指定一个端口
<em>e.g. 8090</em>==<em>比如 8090</em>
defining IP address and port==指定IP地址和端口
<em>e.g. 192.168.0.1:8090</em>==<em>比如 192.168.0.1:8090</em>
defining host name and port==指定域名和端口
<em>e.g. home:8090</em>==<em>比如 home:8090</em>
defining interface name and port==指定网络接口和端口
<em>e.g. #eth0:8090</em>==<em>比如 #eth0:8090</em>
Hint: Dont forget to change your firewall configuration after you have changed the port.==提示: 改变端口后请更改对应防火墙设置.
Proxy and http-Server Administration Port==代理和http服务器管理端口
Changes will take effect in 5-10 seconds==改变在5-10秒后生效
Server Access Restrictions==服务器访问限制
You can restrict the access to this proxy/server using a two-stage security barrier:==使用两层安全屏障限制到此代理/服务器的访问:
define an <em>access domain</em> with a list of granted client IP-numbers or with wildcards==定义一个带有授权IP名单或者通配符的<em>访问域</em>
define an <em>user account</em> with an user:password - pair==创建一个需要密码的<em>用户账户</em>
This is the account that restricts access to the proxy function.==这是一个限制代理访问功能的账户.
You probably don't want to share the proxy to the internet, so you should set the==如果不想在互联网上共享代理,
IP-Number Access Domain to a pattern that corresponds to you local intranet.==请定义一个对应本地局域网的IP访问域表达式.
The default setting should be right in most cases. If you want, you can also set a proxy account==默认设置适用于大多数情况. 如果需要, 您也可以设置代理账户,
so that every proxy user must authenticate first, but this is rather unusual.==让每个代理用户必须先认证, 但这种做法并不常见.
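# For illustration only: given the wildcard support mentioned above, an access domain
# pattern for a typical local intranet might look like 192.168.0.* (the exact pattern
# syntax is an assumption; '*' alone leaves access unrestricted).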
IP-Number filter==IP地址过滤
Use <a==使用 <a
#-----------------------------
#File: Settings_Seed.inc
#---------------------------
Seed Upload Settings==seed上传设置
With these settings you can configure if you have an account on a public accessible==如果您有一个公共服务器的账户, 可在此设置
server where you can host a seed-list file.==seed列表文件相关选项.
General Settings:==通用设置:
If you enable one of the available uploading methods, you will become a principal peer.==如果peer使用了以下某种上传方式, 则本机peer会成为主要peer.
Your peer will then upload the seed-bootstrap information periodically,==您的peer会定期上传seed启动信息,
but only if there have been changes to the seed-list.==前提是seed列表有变更.
Upload Method==上传方式
"Submit"=="提交"
Retry Uploading==重试上传
Here you can specify which upload method should be used.==在此指定上传方式.
Select 'none' to deactivate uploading.==选择'none'关闭上传.
The URL that can be used to retrieve the uploaded seed file, like==可用于获取已上传seed文件的URL, 比如
#-----------------------------
#File: Settings_Seed_UploadFile.inc
#---------------------------
Store into filesystem:==存储至文件系统:
You must configure this if you want to store the seed-list file onto the file system.==如果要将seed列表文件存储至文件系统, 请先配置此选项.
File Location==文件位置
Here you can specify the path within the filesystem where the seed-list file should be stored.==在此指定文件系统内保存seed列表文件的路径.
"Submit"=="提交"
#-----------------------------
#File: Settings_Seed_UploadFtp.inc
#---------------------------
Uploading via FTP:==通过FTP上传:
This is the account for a FTP server where you can host a seed-list file.==此账户能够访问FTP服务器以存储seed列表文件.
If you set this, you will become a principal peer.==如果设置了此选项, 本地peer会被置为主要peer.
Your peer will then upload the seed-bootstrap information periodically,==您的peer会定期上传seed启动信息,
but only if there had been changes to the seed-list.==前提是seed列表有变更.
The host where you have a FTP account, like==您拥有FTP账户的主机, 比如
Path</label>==路径</label>
The remote path on the FTP server, like==ftp服务器上传路径, 比如
Missing sub-directories are NOT created automatically.==不会自动创建缺少的子目录.
Username==用户名
Your log-in at the FTP server==ftp服务器用户名
Password</label>==密码</label>
The password==用户密码
"Submit"=="提交"
#-----------------------------
#File: Settings_Seed_UploadScp.inc
#---------------------------
Uploading via SCP:==通过SCP上传:
This is the account for a server where you are able to login via ssh.==设置通过ssh访问服务器的账户.
#Server==Server
The host where you have an account, like 'my.host.net'==主机, 比如'my.host.net'
#Server&nbsp;Port==Server&nbsp;Port
The sshd port of the host, like '22'==ssh端口, 比如'22'
Path</label>==路径</label>
The remote path on the server, like '~/yacy/seed.txt'. Missing sub-directories are NOT created automatically.==ssh服务器上传路径, 比如'~/yacy/seed.txt'. 不会自动创建缺少的子目录.
Username==用户名
Your log-in at the server==ssh服务器用户名
Password</label>==密码</label>
The password==用户密码
"Submit"=="提交"
#-----------------------------
#File: Settings_ServerAccess.inc
#---------------------------
Server Access Settings==服务器访问设置
IP-Number filter:==IP地址过滤:
Here you can restrict access to the server.==在此您可以限制对服务器的访问.
By default, the access is not limited,==默认情况下, 不对访问作限制,
because this function is needed to spawn the p2p index-sharing function.==因为p2p索引共享功能需要开放的访问.
If you block access to your server \(setting anything else than \'\*\'\), then you will also be blocked==如果您限制了对服务器的访问(设置为'*'以外的值),
from using other peers' indexes for search service.==则您也无法使用其他peer的索引进行搜索.
However, blocking access may be correct in enterprise environments where you only want to index your==然而, 在企业环境中, 如果仅需要索引公司内部网页,
company's own web pages.==则可作相应限制.
staticIP \(optional\):==静态IP (可选):
<strong>The staticIP can help that your peer can be reached by other peers in case that your==<strong>如果您在防火墙或者代理后,
peer is behind a firewall or proxy.</strong> You can create a tunnel through the firewall/proxy==静态IP能够确保其他peer能够找到您.</strong> 您可以创建一个穿过防火墙/代理的通道,
\(look out for 'tunneling through https proxy with connect command'\) and create==(查找"通过connect命令创建https代理通道")
an access point for incoming connections.==以给其他peer提供访问点.
This access address can be set here \(either as IP number or domain name\).==在此设置访问地址(IP地址或者域名).
If the address of outgoing connections is equal to the address of incoming connections,==如果流出链接的地址和流入链接的相同,
you don't need to set anything here, please leave it blank.==请留空此栏.
ATTENTION: Your current IP is recognized as "#\[clientIP\]#".==注意: 当前您的IP被识别为"#[clientIP]#".
If the value you enter here does not match with this IP,==如果您输入的IP与此IP不符,
you will not be able to access the server pages anymore.==那么您就不能访问服务器页面了.
value="Submit"==value="提交"
#-----------------------------
#File: SettingsAck_p.html
#---------------------------
YaCy \'\#\[clientname\]\#\': Settings Acknowledge==YaCy '#[clientname]#': 设置确认
Settings Receipt:==设置回执:
No information has been submitted==未提交信息.
Error with submitted information.==提交信息发生错误.
Nothing changed.</p>==无任何改变.</p>
The user name must be given.==必须给出用户名.
Your request cannot be processed.==不能响应请求.
The password redundancy check failed. You have probably misstyped your password.==密码冗余检查失败. 您可能输错了密码.
Shutting down.</strong><br />Application will terminate after working off all crawling tasks.==正在关闭</strong><br />所有crawl任务完成后程序会关闭.
Your administration account setting has been made.==已创建管理账户设置.
Your new administration account name is \#\[user\]\#. The password has been accepted.<br />If you go back to the Settings page, you must log-in again.==新帐户名是 #[user]#. 密码输入正确.<br />如果返回设置页面, 需要再次输入密码.
Your proxy access setting has been changed.==代理访问设置已改变.
Your proxy account check has been disabled, since you did not supply a password.==代理账户检查已关闭, 因为您未提供密码.
The new proxy IP filter is set to==代理IP过滤设置为
The proxy port is:==代理端口号:
Port rebinding will be done in a few seconds.==端口在几秒后绑定完成.
You can reach your YaCy server under the new location==可以通过新位置访问YaCy服务器:
Your proxy access setting has been changed.==代理访问设置已改变.
Your server access filter is now set to==服务器访问过滤为
Auto pop-up of the Status page is now <strong>disabled</strong>==自动弹出状态页面<strong>关闭.</strong>
Auto pop-up of the Status page is now <strong>enabled</strong>==自动弹出状态页面<strong>打开.</strong>
You are now permanently <strong>online</strong>.==您现在处于永久<strong>在线状态</strong>.
After a short while you should see the effect on the==一会儿可以在
status</a> page.==Status</a> 页面看到变化.
The Peer Name is:==peer名:
Your static Ip\(or DynDns\) is:==静态IP(或DynDns)为:
Seed Settings changed.\#\(success\)\#::You are now a principal peer.==seed设置已改变.#(success)#::本地peer已成为主要peer.
Seed Settings changed, but something is wrong.==seed设置已改变, 但是未完全成功.
Seed Uploading was deactivated automatically.==seed上传自动关闭.
Please return to the settings page and modify the data.==请返回设置页面修改参数.
The remote-proxy setting has been changed==远程代理设置已改变.
The new setting is effective immediately, you don't need to re-start.==新设置立即生效.
The submitted peer name is already used by another peer. Please choose a different name.</strong> The Peer name has not been changed.==提交的peer名已存在, 请更改.</strong> peer名未改变.
Your Peer Language is:==peer语言:
The submitted peer name is not well-formed. Please choose a different name.</strong> The Peer name has not been changed.==提交的peer名格式不正确, 请更改.</strong> peer名未改变.
Peer names must not contain characters other than (a-z, A-Z, 0-9, '-', '_') and must not be longer than 80 characters.==peer名只能包含字符(a-z, A-Z, 0-9, '-', '_'), 且长度不能超过80个字符.
#The new parser settings where changed successfully.==Die neuen Parser Einstellungen wurden erfolgreich gespeichert.
Parsing of the following mime-types was enabled:==已打开以下mime类型的解析:
Seed Upload method was changed successfully.==seed上传方式改变成功.
You are now a principal peer.==本地peer已成为主要peer.
Seed Upload Method:==seed上传方式:
Seed File URL:==seed文件URL:
Your proxy networking settings have been changed.==代理网络设置已改变.
Transparent Proxy Support is:==透明代理支持:
Connection Keep-Alive Support is:==连接保持支持:
Your message forwarding settings have been changed.==消息发送设置已改变.
Message Forwarding Support is:==消息发送支持:
Message Forwarding Command:==消息发送命令:
Recipient Address:==收件人地址:
Please return to the settings page and modify the data.==请返回设置页面修改参数.
You are now <strong>event-based online</strong>.==您现在处于<strong>事件驱动在线</strong>.
After a short while you should see the effect on the==查看变化
You are now in <strong>Cache Mode</strong>.==您现在处于<strong>Cache模式</strong>.
Only Proxy-cache ist available in this mode.==此模式下仅代理缓存可用.
After a short while you should see the effect on the==查看变化
You can now go back to the==现在可返回
Settings</a> page if you want to make more changes.==设置</a> 页面, 如果需要更改更多参数的话.
You can reach your YaCy server under the new location==现在可以通过新位置访问YaCy服务器:
#-----------------------------
#File: Settings_MessageForwarding.inc
#---------------------------
Message Forwarding==消息发送
With this settings you can activate or deactivate forwarding of yacy-messages via email.==此设置能打开或关闭电邮发送yacy消息.
Enable message forwarding==打开消息发送
Enabling/Disabling message forwarding via email.==打开/关闭email发送.
Forwarding Command==发送命令
The command-line program that should be used to forward the message.<br />==将用于发送消息的命令行程序.<br />
Forwarding To==发送给
The recipient email-address.<br />==收件人email地址.<br />
e.g.:==比如:
"Submit"=="提交"
Changes will take effect immediately.==改变立即生效.
#-----------------------------
#File: sharedBlacklist_p.html
#---------------------------
Shared Blacklist==共享黑名单
Add Items to Blacklist==添加条目到黑名单
Unable to store the items into the blacklist file:==不能存储条目到黑名单文件:
#File Error! Wrong Path?==Datei Fehler! Falscher Pfad?
YaCy-Peer &quot;<span class="settingsValue">\#\[name\]\#</span>&quot; not found.==YaCy peer&quot;<span class="settingsValue">#[name]#</span>&quot; 未找到.
not found or empty list.==未找到或者列表为空.
Wrong Invocation! Please invoke with==调用错误! 请使用以下方式调用
Blacklist source:==黑名单源:
Blacklist target:==黑名单目的:
Blacklist item==黑名单条目
"select all"=="全部选择"
"deselect all"=="全部反选"
value="add"==value="添加"
#-----------------------------
#File: Status.html
#---------------------------
Console Status==控制台状态
Log-in as administrator to see full status==登录管理用户以查看完整状态
Welcome to YaCy!==欢迎使用YaCy!
Your settings are _not_ protected!</strong>==您的设置未受保护!</strong>
Please open the <a href="ConfigAccounts_p.html">accounts configuration</a> page <strong>immediately</strong>==请<strong>立即</strong>打开<a href="ConfigAccounts_p.html">账户设置</a>页面
and set an administration password.==并设置管理密码.
You have not published your peer seed yet. This happens automatically, just wait.==尚未发布您的peer seed. 将会自动发布, 请稍候.
The peer must go online to get a peer address.==peer必须上线获得peer地址.
You cannot be reached from outside.==外部不能访问您的peer.
A possible reason is that you are behind a firewall, NAT or Router.==很可能是您在防火墙, NAT或者路由的后面.
But you can <a href="index.html">search the internet</a> using the other peers'==但是您依然能通过其他peer的全球索引进行<a href="index.html">互联网搜索</a>
global index on your own search page.==, 就在您自己的搜索页面上.
"bad"=="坏"
"idea"="主意"
"good"="好"
"Follow YaCy on Twitter"=="在Twitter上关注YaCy"
We encourage you to open your firewall for the port you configured \(usually: 8090\),==我们建议您在防火墙上开放您配置的端口(通常是: 8090),
or to set up a 'virtual server' in your router settings \(often called DMZ\).==或者在路由器设置中建立一个"虚拟服务器"(常称为DMZ).
Please be fair, contribute your own index to the global index.==请公平地贡献您的索引给全球索引.
Free disk space is lower than \#\[minSpace\]\#. Crawling has been disabled. Please fix==空闲磁盘空间低于 #[minSpace]#. crawl已被关闭,
it as soon as possible and restart YaCy.==请尽快修复并重启YaCy.
Free memory is lower than \#\[minSpace\]\#. DHT has been disabled. Please fix==空闲内存低于 #[minSpace]#. DHT已被关闭,
Latest public version is==最新版本为
You can download a more recent version of YaCy. Click here to install this update and restart YaCy:==您可以下载最新版本YaCy, 点此进行升级并重启:
#"Update YaCy"=="Update YaCy"
Install YaCy==安装YaCy
You are running a server in senior mode and you support the global internet index,==服务器运行在高级模式, 并支持全球索引,
which you can also <a href="index.html">search yourself</a>.==您也能进行<a href="index.html">本地搜索</a>.
You have a principal peer because you publish your seed-list to a public accessible server==您拥有一个主要peer, 因为您向公共服务器公布了您的seed列表,
where it can be retrieved using the URL==可使用此URL进行接收:
Your Web Page Indexer is idle. You can start your own web crawl <a href="CrawlStartSite_p.html">here</a>==网页索引器当前空闲. 可以点击<a href="CrawlStartSite_p.html">这里</a>开始网页crawl
Your Web Page Indexer is busy. You can <a href="Crawler_p.html">monitor your web crawl</a> here.==网页索引器当前忙碌. 点击<a href="Crawler_p.html">这里</a>查看状态.
#-----------------------------
#File: Status_p.inc
#---------------------------
#System Status==System Status
Process</dt>==进程</dt>
Unknown==未知
Uptime==运行时间
System Resources==系统资源
Processors:==处理器:
Protection==保护
Password is missing==无密码
password-protected==受密码保护
Unrestricted access from localhost==本地无限制访问
Address</dt>==地址</dt>
peer address not assigned==未分配peer地址
Public Address:==公共地址:
YaCy Address:==YaCy地址:
#Peer Host==Peer Host
#Port Forwarding Host==Port Forwarding Host
not used==未使用
broken==已损坏
connected==已连接
#Remote Proxy==Remote Proxy
not used==未使用
Used for YaCy -> YaCy communication:==用于YaCy -> YaCy通信:
WARNING:==警告:
You do this on your own risk.==此操作风险自负.
If you do this without YaCy running on a desktop-pc or without Java 6 installed, this will possibly break startup.==如果YaCy不是运行在台式机上, 或者未安装Java 6, 这样做可能会导致启动失败.
In this case, you will have to edit the configuration manually in DATA/SETTINGS/yacy.conf==在此情况下, 您需要手动修改配置文件 DATA/SETTINGS/yacy.conf
>Experimental<==>实验性<
Yes==是
No==否
Auto-popup on start-up==启动时自动弹出
Disabled==关闭
Enable\]==打开]
Enabled <a==打开 <a
Disable\]==关闭]
Memory Usage==内存使用
free:==空闲:
total:==全部:
max:==最大:
Configure==配置
Reset</a>==重置</a>
Incoming Connections==流入连接
Active:==活动:
#Max:==Max:
#Indexing Queue==Indexier Puffer
Loader Queue==加载器队列
paused==已暂停
>Queues<==>队列<
Local Crawl==本地crawl
Remote triggered Crawl==远端触发的crawl
Pre-Queueing==预排序
Seed server==seed服务器
Configure==配置
Enabled: Updating to server==已打开: 与服务器同步
Last upload: #\[lastUpload\]# ago.==上次上传: #[lastUpload]# 之前.
Enabled: Updating to file==已打开: 与文件同步
#-----------------------------
#File: Steering.html
#---------------------------
Steering</title>==向导</title>
Checking peer status...==正在检查peer状态...
Peer is online again, forwarding to status page...==peer再次上线, 正在转至状态页面...
Peer is not online yet, will check again in a few seconds...==peer尚未上线, 几秒后重新检测...
No action submitted==未提交动作
Go back to the <a href="Settings_p.html">Settings</a> page==将返回<a href="Settings_p.html">设置</a>页面
Your system is not protected by a password==您的系统未受密码保护
Please go to the <a href="ConfigAccounts_p.html">User Administration</a> page and set an administration password.==请在<a href="ConfigAccounts_p.html">用户管理</a>页面设置管理密码.
You don't have the correct access right to perform this task.==无执行此任务权限.
Please log in.==请登录.
You can now go back to the <a href="Settings_p.html">Settings</a> page if you want to make more changes.==您现在可以返回<a href="Settings_p.html">设置</a>页面进行详细设置.
See you soon!==再见!
Just a moment, please!==请稍候.
Application will terminate after working off all scheduled tasks.==程序在所有任务完成后将停止,
Then YaCy will restart.==然后YaCy会重新启动.
If you can't reach YaCy's interface after 5 minutes restart failed.==如果5分钟后不能访问此页面说明重启失败.
Installing release==正在安装
YaCy will be restarted after installation==YaCy在安装完成后会重新启动
#-----------------------------
#File: Supporter.html
#---------------------------
Supporter<==参与者<
"Please enter a comment to your link recommendation. (Your Vote is also considered without a comment.)"=="输入推荐链接备注. (可留空.)"
Supporter are switched off for users without authorization==未授权用户的参与者功能已关闭
"bookmark"=="书签"
"Add to bookmarks"=="添加到书签"
"positive vote"=="好评"
"Give positive vote"=="给予好评"
"negative vote"=="差评"
"Give negative vote"=="给予差评"
provided by YaCy peers with an URL in their profile. This shows only URLs from peers that are currently online.==由资料中填写了URL的YaCy peer提供. 仅显示当前在线peer的URL.
#-----------------------------
#File: Surftips.html
#---------------------------
Surftips</title>==建议</title>
Surftips</h2>==建议</h2>
Surftips are switched off==建议已关闭
title="bookmark"==title="书签"
alt="Add to bookmarks"==alt="添加到书签"
title="positive vote"==title="好评"
alt="Give positive vote"==alt="给予好评"
title="negative vote"==title="差评"
alt="Give negative vote"==alt="给予差评"
YaCy Supporters<==YaCy参与者<
>a list of home pages of yacy users<==>YaCy用户的主页列表<
provided by YaCy peers using public bookmarks, link votes and crawl start points==由使用公共书签, 链接评价和crawl起始点的peer提供
"Please enter a comment to your link recommendation. \(Your Vote is also considered without a comment.\)"=="输入推荐链接备注. (可留空.)"
"authentication required"=="需要认证"
Hide surftips for users without autorization==隐藏非认证用户的建议功能
Show surftips to everyone==所有人均可使用建议
#-----------------------------
#File: Table_API_p.html
#---------------------------
: Peer Steering==: Peer向导
Steering of API Actions<==API动作向导<
This table shows actions that had been issued on the YaCy interface==此表显示通过YaCy界面发出的动作,
to change the configuration or to request crawl actions.==这些动作用于改变配置或者请求crawl.
These recorded actions can be used to repeat specific actions and to send them==这些已记录的动作可用于重复执行特定动作,
to a scheduler for a periodic execution.==也可将其发送给定时器以周期执行.
>Recorded Actions<==>已记录动作<
"next page"=="下一页"
"previous page"=="上一页"
"next page"=="下一页"
"previous page"=="上一页"
of \#\[of\]\#== 共 #[of]#
>Date==>日期
>Type==>类型
>Comment==>注释
Call<br/>Count<==调用<br/>次数<
Recording<==正在记录<
Last&nbsp;Exec==上次&nbsp;执行
Next&nbsp;Exec==下次&nbsp;执行
>Scheduler<==>定时器<
#>URL<==>URL
>no repetition<==>不重复<
>activate scheduler<==>激活定时器<
"Execute Selected Actions"=="执行选中活动"
"Delete Selected Actions"=="删除选中活动"
>Result of API execution==>API执行结果
#>Status<==>Status>
#>URL<==>URL<
>minutes<==>分钟<
>hours<==>小时<
>days<==>天<
Scheduled actions are executed after the next execution date has arrived within a time frame of \#\[tfminutes\]\# minutes.==已安排的动作会在到达下次执行时间后的 #[tfminutes]# 分钟时间窗内执行.
#-----------------------------
#File: Table_RobotsTxt_p.html
#---------------------------
Table Viewer==表格查看
The information that is presented on this page can also be retrieved as XML==此页信息也可表示为XML
Click the API icon to see the XML.==点击API图标查看XML.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
>robots.txt table<==>robots.txt 列表<
#-----------------------------
### This Tables section is removed in current SVN Versions
#File: Tables_p.html
#---------------------------
Table Administration==表格管理
Table Selection==选择表格
Select Table:==选择表格:
#"Show Table"=="Zeige Tabelle"
show max.==最多显示
>all<==>全部<
entries,==个条目,
search rows for==搜索内容
"Search"=="搜索"
Table Editor: showing table==表格编辑器: 显示表格
#PK==Primärschlüssel
"Edit Selected Row"=="编辑选中行"
"Add a new Row"=="添加新行"
"Delete Selected Rows"=="删除选中行"
"Delete Table"=="删除表格"
Row Editor==行编辑器
Primary Key==主键
"Commit"=="备注"
#-----------------------------
#File: Table_YMark_p.html
#---------------------------
Table Viewer==表格查看
YMark Table Administration==YMark表格管理
Table Editor: showing table==表格编辑器: 显示表格
"Edit Selected Row"=="编辑选中行"
"Add a new Row"=="添加新行"
"Delete Selected Rows"=="删除选中行"
"Delete Table"=="删除表格"
"Rebuild Index"=="重建索引"
Primary Key==主键
>Row Editor<==>行编辑器<
"Commit"=="备注"
Table Selection==选择表格
Select Table:==选择表格:
show max. entries==最多显示条目
>all<==>所有<
Display columns:==显示列:
"load"=="载入"
Search/Filter Table==搜索/过滤表格
search rows for==搜索
"Search"=="搜索"
#>Tags<==>Tags<
>select a tag<==>选择标签<
>Folders<==>目录<
>select a folder<==>选择目录<
>Import Bookmarks<==>导入书签<
#Importer:==Importer:
#>XBEL Importer<==>XBEL Importer<
#>Netscape HTML Importer<==>Netscape HTML Importer<
"import"=="导入"
#-----------------------------
#File: terminal_p.html
#---------------------------
#YaCy System Monitor==YaCy System Monitor
Search Form==搜索页面
Crawl Start==开始crawl
Status Page==状态页面
Confirm Shutdown==确认关闭
>&lt;Shutdown==>&lt;关闭程序
Event Terminal==事件终端
Image Terminal==图形终端
#Domain Monitor==Domain Monitor
"Loading Processing software..."=="正在载入Processing软件..."
This browser does not have a Java Plug-in.==此浏览器没有安装Java插件.
Get the latest Java Plug-in here.==在此获取最新的Java插件.
Resource Monitor==资源监视器
Network Monitor==网络监视器
#-----------------------------
#File: Threaddump_p.html
#---------------------------
YaCy Debugging: Thread Dump==YaCy Debug: 线程Dump
Threaddump<==线程Dump<
"Single Threaddump"=="单线程Dump"
"Multiple Dump Statistic"=="多个Dump数据"
#"create Threaddump"=="Threaddump erstellen"
#-----------------------------
#File: User.html
#---------------------------
User Page==用户页面
You are not logged in.<br />==当前未登录.<br />
Username:==用户名:
Password: <input==密码: <input
"login"=="登录"
You are currently logged in as \#\[username\]\#.==当前作为 #[username]# 登录.
You have used==您已使用
minutes of your onlinetime limit of==分钟, 每天在线时间上限为
minutes per day.==分钟.
old Password==旧密码
new Password<==新密码<
new Password\(repetition\)==新密码(重复)
"Change"=="改变"
You are currently logged in as admin.==当前作为管理员登录.
value="logout"==value="注销"
\(after logout you will be prompted for your password again. simply click "cancel"\)==(注销后会再次要求输入密码, 直接点击"取消"即可)
Password was changed.==密码已改变.
Old Password is wrong.==旧密码错误.
New Password and its repetition do not match.==新密码两次输入不匹配.
New Password is empty.==新密码为空.
#-----------------------------
#File: ViewFile.html
#---------------------------
YaCy \'\#\[clientname\]\#\': View URL Content==YaCy '#[clientname]#': 查看链接内容
View URL Content==查看链接内容
>Get URL Viewer<==>获取链接浏览器<
>URL Metadata<==>链接元数据<
#URL==URL
#Hash==Hash
Word Count==字数
Description==描述
Size==大小
View as==查看形式
#Original==Original
Plain Text==文本
Parsed Text==解析文本
Parsed Sentences==解析句子
Parsed Tokens/Words==解析令牌/字
Link List==链接列表
"Show"=="显示"
Unable to find URL Entry in DB==无法找到数据库中的链接.
Invalid URL==无效链接
Unable to download resource content.==无法下载资源内容.
Unable to parse resource content.==无法解析资源内容.
Unsupported protocol.==不支持的协议.
>Original Content from Web<==>网页原始内容<
Parsed Content==解析内容
>Original from Web<==>网页原始内容<
>Original from Cache<==>缓存原始内容<
>Parsed Tokens<==>解析令牌<
#-----------------------------
#File: ViewLog_p.html
#---------------------------
Lines==行
reversed order==倒序排列
"refresh"=="刷新"
#-----------------------------
#File: ViewProfile.html
#---------------------------
Local Peer Profile:==本地peer资料:
Remote Peer Profile==远端peer资料
Wrong access of this page==页面权限错误
The requested peer is unknown or a potential peer.==所请求peer未知或者是潜在peer.
The profile can't be fetched.==无法获取资料.
The peer==peer
is not online.==当前不在线.
This is the Profile of==这是以下用户的资料:
#Name==Name
#Nick Name==Nick Name
#Homepage==Homepage
#eMail==eMail
#ICQ==ICQ
#Jabber==Jabber
#Yahoo!==Yahoo!
#MSN==MSN
#Skype==Skype
Comment==注释
View this profile as==查看方式
> or==> 或者
#vCard==vCard
#-----------------------------
#File: Crawler_p.html
#---------------------------
Crawler Queues==Crawler队列
PPM \(Pages Per Minute\)==PPM (页面每分钟)
#Traffic \(Crawler\)==Traffic (Crawler)
RWI RAM \(Word Cache\)==RWI RAM (关键字缓存)
Error with profile management. Please stop YaCy, delete the file DATA/PLASMADB/crawlProfiles0.db==资料管理出错. 请关闭YaCy, 并删除文件 DATA/PLASMADB/crawlProfiles0.db
and restart.==后重启.
Error:==错误:
Application not yet initialized. Sorry. Please wait some seconds and repeat==抱歉, 程序未初始化, 请稍候并重复
ERROR: Crawl filter==错误: crawl过滤
does not match with==与crawl根
crawl root==不匹配
Please try again with different==请使用不同的过滤字再试一次
filter. ::==. ::
Crawling of==crawl
failed. Reason:==失败. 原因:
Error with URL input==URL输入错误
Error with file input==文件输入错误
started.==已开始.
Please wait some seconds,==请稍等,
it may take some seconds until the first result appears there.==在出现第一个搜索结果前需要几秒钟时间.
If you crawl any un-wanted pages, you can delete them <a href="IndexCreateWWWLocalQueue_p.html">here</a>.==如果您crawl了不需要的页面, 您可以 <a href="IndexCreateWWWLocalQueue_p.html">点这</a> 删除它们.
Crawl Queue:==crawl队列:
Queue</th>==队列</th>
Profile</th>==资料</th>
Initiator==发起者
Depth</th>==深度</th>
Modified Date==修改日期
Anchor Name==锚点名
#URL==URL
Delete==删除
Next update in==下次更新将在
/> seconds.==/> 秒后.
See a access timing <a href="api/latency_p.xml">here</a>==<a href="api/latency_p.xml">点这</a> 查看访问时间
Queue</th>==队列</th>
>Size==>大小
#Max==Max
#Indexing</td>==Indexieren</td>
Loader==加载器
Local Crawler==本地crawler
unlimited==无限制
#Remote Crawler==Remote Crawler
#Speed==速度
"minimum"=="最小"
"custom"=="自定义"
"maximum"=="最大"
Database==数据库
Entries==条目数
Pages \(URLs\)==页面(链接)
RWIs \(Words\)==RWIs (字)
Indicator==指示器
Level==级别
#-----------------------------
#File: WatchWebStructure_p.html
#---------------------------
The data that is visualized here can also be retrieved in a XML file, which lists the reference relation between the domains.==此处可视化的数据也能以XML文件形式获取, 其中列出了域之间的关联关系.
With a GET-property 'about' you get only reference relations about the host that you give in the argument field for 'about'.==使用GET属性'about', 仅会获得您在'about'参数中给出的主机的关联关系.
With a GET-property 'latest' you get a list of references that had been computed during the current run-time of YaCy, and with each next call only an update to the next list of references.==使用GET属性'latest', 能获得YaCy本次运行期间计算出的关联关系列表, 此后每次调用仅返回相对上一列表的更新.
Click the API icon to see the XML file.==点击API图标查看XML文件.
To see a list of all APIs, please visit the <a href=\"http://www.yacy-websuche.de/wiki/index.php/Dev:API\">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
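# For illustration: a call using the 'about' GET-property described above might look
# like http://localhost:8090/api/webstructure.xml?about=example.org (endpoint path,
# host and port are assumptions; the API icon on the page shows the exact URL).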
Web Structure==网页结构
host<==主机<
depth<==深度<
nodes<==节点<
time<==时间<
size<==大小<
>Background<==>背景<
#>Text<==>Text<
>Line<==>线<
>Dot<==>点<
>Dot-end<==>末点<
>Color <==>颜色<
"change"=="改变"
#-----------------------------
#File: Wiki.html
#---------------------------
YaCyWiki page:==YaCyWiki:
last edited by==最后编辑由
change date==改变日期
Edit<==编辑<
only granted to admin==只授权给管理员
Grant Write Access to==授予写权限
# !!! Do not translate the input buttons because that breaks the function to switch rights !!!
#"all"=="Allen"
#"admin"=="Administrator"
Start Page==开始页面
#Index==Index
Versions==版本
Author:==作者:
#Text:==Text:
You can use==您可以在这使用
Wiki Code</a> here.==wiki代码</a>.
"edit"=="编辑"
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
>Preview==>预览
No changes have been submitted so far!==未提交任何改变!
Subject==主题
Change Date==改变日期
Last Author==最后作者
IO Error reading wiki database:==读取wiki数据库时出现IO错误:
Select versions of page==选择页面版本
Compare version from==原始版本
"Show"=="显示"
with version from==对比版本
"current"=="当前"
"Compare"=="对比"
Return to==返回
Changes will be published as announcement on YaCyNews==改变会被发布在YaCy新闻中.
#-----------------------------
#File: WikiHelp.html
#---------------------------
Wiki Help==Wiki帮助
Wiki-Code==Wiki代码
This table contains a short description of the tags that can be used in the Wiki and several other servlets==此表简述了可在Wiki和其他几个servlet中使用的标签
of YaCy. For a more detailed description visit the==, 详情请见
#YaCy Wiki==YaCy Wiki
Description==描述
\=headline===headline
These tags create headlines. If a page has three or more headlines, a table of content will be created automatically.==此标记用于创建标题. 如果页面有三个或更多标题, 则会自动生成目录.
Headlines of level 1 will be ignored in the table of content.==目录中会忽略一级标题.
#text==Text
These tags create stressed texts. The first pair emphasizes the text \(most browsers will display it in italics\),==这些标记用于创建强调文本. 第一对为一般强调(多数浏览器以斜体显示),
the second one emphazises it more strongly \(i.e. bold\) and the last tags create a combination of both.==第二对为更强的强调(即粗体), 最后一对为两者的组合.
Text will be displayed <span class=\"strike\">stricken through</span>.==文本内容以<span class="strike">删除线</span>表示.
Lines will be indented. This tag is supposed to mark citations, but may as well be used for styling purposes.==缩进内容, 此标记主要用于引用, 也能用于标识样式.
point==point
These tags create a numbered list.==用于创建有序列表.
something<==something<
another thing==another thing
and yet another==and yet another
something else==something else
These tags create an unnumbered list.==用于创建无序列表.
word==word
\:definition==:definition
These tags create a definition list.==用于创建定义列表.
This tag creates a horizontal line.==创建水平线.
pagename==pagename
description\]\]==description]]
This tag creates links to other pages of the wiki.==创建到其他wiki页面的链接.
This tag displays an image, it can be aligned left, right or center.==显示图片, 可设置左对齐, 右对齐和居中.
These tags create a table, whereas the first marks the beginning of the table, the second starts==用于创建表格, 第一个标记为表格开头, 第二个为换行,
a new line, the third and fourth each create a new cell in the line. The last displayed tag==第三个与第四个各在行内创建一个新单元格.
closes the table.==最后一个为表格结尾.
#The escape tags will cause all tags in the text between the starting and the closing tag to not be treated as wiki-code.==Durch diesen Tag wird der Text, der zwischen den Klammern steht, nicht interpretiert und unformatiert als normaler Text ausgegeben.
A text between these tags will keep all the spaces and linebreaks in it. Great for ASCII-art and program code.==此标记之间的文本会保留所有空格和换行, 主要用于ASCII艺术图片和编程代码.
If a line starts with a space, it will be displayed in a non-proportional font.==如果一行以空格开头, 则会以等宽字体显示.
url description==URL描述
This tag creates links to external websites.==此标记创建外部网站链接.
alt text==替代文本
#-----------------------------
#File: yacyinteractive.html
#---------------------------
YaCy Interactive Search==YaCy交互搜索
This search result can also be retrieved as RSS/<a href=\"http://www.opensearch.org\">opensearch</a> output.==此搜索结果能以RSS/<a href="http://www.opensearch.org">opensearch</a>形式表示.
The query format is similar to <a href=\"http://www.loc.gov/standards/sru/\">SRU</a>.==请求的格式与<a href="http://www.loc.gov/standards/sru/">SRU</a>相似.
Click the API icon to see an example call to the search rss API.==点击API图标查看示例.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
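# For illustration: such an RSS/opensearch call typically looks like
# http://localhost:8090/yacysearch.rss?query=yacy (host and port are assumptions;
# the API icon on the page shows an exact example call).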
#-----------------------------
#File: yacysearch.html
#---------------------------
Search Page==搜索网页
This search result can also be retrieved as RSS/<a href="http://www.opensearch.org">opensearch</a> output.==此搜索结果能以RSS/<a href="http://www.opensearch.org">opensearch</a>形式表示.
The query format is similar to <a href="http://www.loc.gov/standards/sru/">SRU</a>.==请求的格式与<a href="http://www.loc.gov/standards/sru/">SRU</a>相似.
Click the API icon to see an example call to the search rss API.==点击API图标查看示例.
To see a list of all APIs, please visit the <a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API wiki page</a>.==查看所有API, 请访问<a href="http://www.yacy-websuche.de/wiki/index.php/Dev:API">API Wiki</a>.
Did you mean:==是否搜索:
"Search"=="搜索"
'Search'=='搜索'
"search again"=="再次搜索"
more options==更多选项
#Text==Text
Images==图片
#Audio==Audio
Video==视频
Applications==程序
The following words are stop-words and had been excluded from the search:==以下关键字为停用词, 已从搜索中排除:
No Results.==未找到.
length of search words must be at least 3 characters==搜索关键字至少为3个字符
> of==> 共
g> local,==g> 本地,
#g> remote),==g> remote),
> from==> 来自
remote YaCy peers.==远端YaCy peer.
#-----------------------------
#File: yacysearchitem.html
#---------------------------
"bookmark"=="书签"
"recommend"=="推荐"
"delete"=="删除"
Pictures==图像
#-----------------------------
#File: yacysearchtrailer.html
#---------------------------
Show search results for "\#\[query\]\#" on map==在地图上显示 "#[query]#" 的搜索结果
#>Domain Facet==>Domain Navigator
>Name Space Facet==>命名空间导航
>Author Facet==>作者导航
#-----------------------------
### Subdirectory api ###
#File: api/table_p.html
#---------------------------
Table Viewer==查看表格
#>PK<==>Primärschlüssel<
"Edit Table"=="编辑表格"
#-----------------------------
#File: api/yacydoc.html
#---------------------------
>Author<==>作者<
>Description<==>描述<
>Subject<==>主题<
#>Publisher<==>Veröffentlicher<
#>Contributor<==>Beiträger<
>Date<==>日期<
>Type<==>类型<
>Identifier<==>标识符<
>Language<==>语言<
>Load Date<==>加载日期<
>Referrer Identifier<==>关联标识符<
#>Referrer URL<==>Referrer URL<
>Document size<==>文件大小<
>Number of Words<==>关键字数目<
#-----------------------------
### Subdirectory env/templates ###
#File: env/templates/header.template
#---------------------------
YaCy - Distributed Search Engine==YaCy - 分布式搜索引擎
### SEARCH & BROWSE ###
>Search==>搜索
Web Search==搜索网页
File Search==搜索文件
Search&nbsp;&amp;&nbsp;Browse==搜索&nbsp;&amp;&nbsp;浏览
Search Page==搜索网页
Rich Client Search==富客户端搜索
Interactive local Search==本地交互搜索
Compare Search==对比搜索
Ranking Config==排名设置
>Surftips==>建议
Local Peer Wiki==本地Wiki
>Bookmarks==>书签
>Help==>帮助
### INDEX CONTROL ###
Index&nbsp;Production==索引&nbsp;生成
Index&nbsp;Control==索引&nbsp;控制
Index Creation==索引创建
Crawler Monitor==crawler监视
Crawl Results==crawl结果
Index Administration==索引管理
Filter &amp; Blacklists==过滤 &amp; 黑名单
### SEARCH INTEGRATION ###
Search Integration==搜索集成
Search Portals==搜索门户
Customization==自定义
### MONITORING ###
Monitoring==监视
YaCy Network==YaCy网络
Web Visualization==网页可视化
Access Tracker==访问跟踪
#Server Log==Server Log
>Messages==>消息
#>Terminal==>Terminal
"New Messages"=="新消息"
### PEER CONTROL
Peer Control==peer控制
Admin Console==管理控制台
>API Action Steering<==>API动作向导<
Confirm Restart==确认重启
Re-Start</a>==重启</a>
Confirm Shutdown==确认关闭
>Shutdown==>关闭
### THE PROJECT ###
The Project==项目
Project Home==项目主页
#Deutsches Forum==Deutsches Forum
English Forum==论坛
YaCy Project Wiki==YaCy项目Wiki
# Development Change Log==Entwicklung Änderungshistorie
amp;language=en==amp;language=cn
Development Change Log==变更日志
Peer Statistics::YaCy Statistics==peer统计数据::YaCy数据
#-----------------------------
#File: env/templates/simpleheader.template
#---------------------------
#Administration<==Administration<
>Web Search<==>网页搜索<
>Search Network<==>搜索网络<
Peer Owner Profile==peer所有者资料
Help / YaCy Wiki==帮助 / YaCy Wiki
#-----------------------------
#File: env/templates/submenuAccessTracker.template
#---------------------------
Access Tracker==访问跟踪
Server Access==服务器访问
Overview==概述
#Details==Details
Connections</a>==连接</a>
Local Search==本地搜索
#Log==Log
#Host Tracker==Host Tracker
Remote&nbsp;Search==远程搜索
#-----------------------------
#File: env/templates/submenuBlacklist.template
#---------------------------
Filter &amp; Blacklists==过滤 &amp; 黑名单
Blacklist Administration==黑名单管理
Blacklist Cleaner==黑名单整理
Blacklist Test==黑名单测试
Import/Export==导入 / 导出
Index Cleaner==索引整理
#-----------------------------
#File: env/templates/submenuConfig.template
#---------------------------
Peer Administration Console==peer管理控制台
#Status==状态
Basic Configuration==基本设置
>Accounts==>账户
Network Configuration==网络设置
>Heuristics<==>启发式<
Dictionary Loader==字典加载器
System Update==系统升级
>Performance==>性能
Advanced Settings==高级设置
Parser Configuration==解析配置
Local robots.txt==本地robots.txt
#Web Cache==Web Cache
Advanced Properties==高级设置
#-----------------------------
#File: env/templates/submenuContentIntegration.template
#---------------------------
External Content Integration==外部内容集成
Import phpBB3 forum==导入phpBB3论坛内容
Import Mediawiki dumps==导入Mediawiki数据
Import OAI-PMH Sources==导入OAI-PMH源
#-----------------------------
#File: env/templates/submenuCookie.template
#---------------------------
Cookie Menu==Cookie菜单
Incoming&nbsp;Cookies==传入cookie
Outgoing&nbsp;Cookies==传出cookie
#-----------------------------
#File: env/templates/submenuCrawlMonitor.template
#---------------------------
Processing Monitor==进程监视
Crawler Queues==crawler队列
Loader<==加载器<
Rejected URLs==已拒绝URL
>Queues<==>队列<
Local<==本地<
#Global==Global
#Remote==Remote
Crawler Steering==crawl向导
Scheduler and Profile Editor<==定时器与资料编辑器<
#robots.txt Monitor==robots.txt Monitor
#-----------------------------
#File: env/templates/submenuCustomization.template
#---------------------------
Customization==自定义
>Appearance==>外观
User Profile==用户资料
>Language==>语言
#-----------------------------
#File: env/templates/submenuIndexControl.template
#---------------------------
Index Administration==索引管理
Reverse Word Index Administration==反向关键字索引管理
URL References Database==URL关联关系数据库
URL Viewer==URL浏览
#-----------------------------
#File: env/templates/submenuIndexCreate.template
#---------------------------
#Web Crawler Control==Web Crawler Steuerung
#Start a Web Crawl==Starte einen Web Crawl
#Crawl Start==Crawl starten
#Crawl Profile Editor==Crawl Profil Editor
#Crawler Queues==Crawler Puffer
#Indexing<==Indexierung<
#Loader<==Lader<
#URLs to be processed==zu verarbeitende URLs
#Processing Queues==Warteschlangen
#Local<==Lokal<
#Global<==Global<
#Remote<==Remote<
#Overhang<==Überhang<
#Media Crawl Queues==Medien Crawl-Puffer
#>Images==>Bilder
#>Movies==>Filme
#>Music==>Musik
#--- New menu items ---
Index Creation==索引创建
#Crawler/Spider<==Crawler/Spider<
Full Site Crawl==全站crawl
Sitemap Loader==网站地图加载
Crawl Start<br/>\(Expert\)==开始crawl<br/>(专家模式)
Network<br/>Scanner==网络<br/>扫描仪
#>Intranet<br/>Scanner<==>Intranet<br/>Scanner<
Crawling of==正在crawl
#MediaWikis==MediaWikis
>phpBB3 Forums<==>phpBB3论坛<
Content Import<==导入内容<
Network Harvesting<==网络采集<
#Remote<br/>Crawling==Remote<br/>Crawling
#Scraping<br/>Proxy==Scraping<br/>Proxy
Database Reader<==数据库读取<
for phpBB3 Forums==用于phpBB3论坛
Dump Reader for==Dump阅读器用于
#MediaWiki dumps==MediaWiki dumps
#-----------------------------
#File: env/templates/submenuPortalIntegration.template
#---------------------------
Search Portal Integration==搜索门户集成
Live Search Anywhere==任意位置即时搜索
Generic Search Portal==一般搜索门户
Search Box Anywhere==任意位置搜索框
#-----------------------------
#File: env/templates/submenuPublication.template
#---------------------------
Publication==发布
#Wiki==Wiki
#Blog==Blog
File Hosting==文件共享
#-----------------------------
#File: env/templates/submenuViewLog.template
#---------------------------
Server Log Menu==服务器日志菜单
#Server Log==Server Log
#-----------------------------
#File: env/templates/submenuWebStructure.template
#---------------------------
Web Visualization==网页可视化
Web Structure==网页结构
Image Collage==图像拼贴
#-----------------------------
#File: proxymsg/authfail.inc
#---------------------------
Your Username/Password is wrong.==用户名/密码输入错误.
Username</label>==用户名</label>
Password</label>==密码</label>
"login"=="登录"
#-----------------------------
#File: proxymsg/error.html
#---------------------------
YaCy: Error Message==YaCy: 错误消息
request:==请求:
unspecified error==未定义错误
not-yet-assigned error==未定义错误
You don't have an active internet connection. Please go online.==无可用网络连接, 请接入网络.
Could not load resource. The file is not available.==无法加载资源, 文件不可用.
Exception occurred==异常发生
Generated \#\[date\]\# by==生成日期 #[date]# 由
#-----------------------------
#File: proxymsg/proxylimits.inc
#---------------------------
Your Account is disabled for surfing.==您的账户没有浏览权限.
Your Timelimit \(\#\[timelimit\]\# Minutes per Day\) is reached.==您的账户时限(#[timelimit]# 分钟每天)已到.
#-----------------------------
#File: proxymsg/unknownHost.inc
#---------------------------
The server==服务器
could not be found.==未找到.
Did you mean:==是不是:
#-----------------------------
#File: www/welcome.html
#---------------------------
YaCy: Default Page for Individual Peer Content==YaCy: peer个人内容的默认页面
Individual&nbsp;Web&nbsp;Page==个人网页
Welcome to your own web page<br />in the <strong>YaCy Network==欢迎来到您自己的网页<br />它位于<strong>YaCy网络
THIS IS A DEMONSTRATION PAGE FOR YOUR OWN INDIVIDUAL WEB SERVER!==这是您个人网页服务器的演示页面!
PLEASE REPLACE THIS PAGE BY PUTTING A FILE index.html INTO THE PATH==请将一个index.html文件放入以下路径以替换此页面
&lt;YaCy-application-home&gt;<strong>\#\[wwwpath\]\#</strong>==&lt;YaCy程序主目录&gt;<strong>#[wwwpath]#</strong>
#-----------------------------
#File: js/Crawler.js
#---------------------------
"Continue this queue"=="继续队列"
"Pause this queue"=="暂停队列"
#-----------------------------
#File: js/yacyinteractive.js
#---------------------------
>total results==>全部结果
&nbsp;topwords:==&nbsp;热门词:
>Name==>名称
>Size==>大小
>Date==>日期
#-----------------------------
#File: yacy/ui/js/jquery-flexigrid.js
#---------------------------
'Displaying \{from\} to \{to\} of \{total}\ items'=='显示 {from} 到 {to}, 总共 {total} 个条目'
'Processing, please wait ...'=='正在处理, 请稍候...'
'No items'=='无条目'
#-----------------------------
#File: yacy/ui/js/jquery-ui-1.7.2.min.js
#---------------------------
Loading&#8230;==正在加载&#8230;
#-----------------------------
#File: yacy/ui/js/jquery.ui.all.min.js
#---------------------------
Loading&#8230;==正在加载&#8230;
#-----------------------------
#File: yacy/ui/index.html
#---------------------------
About YaCy-UI==关于YaCy-UI
Admin Console==管理控制台
"Bookmarks"=="书签"
>Bookmarks==>书签
#Server Log==Server Log
#-----------------------------
#File: yacy/ui/yacyui-admin.html
#---------------------------
Peer Control==peer控制
"Login"=="登录"
#Login==Anmelden
Themes==主题
Messages==消息
Re-Start==重启
Shutdown==关闭
Web Indexing==网页索引
Crawl Start==开始crawl
Monitoring==监视
YaCy Network==YaCy网络
>Settings==>设置
"Basic Settings"=="基本设置"
\tBasic==基本
Accounts==账户
"Network"=="网络"
\tNetwork==网络
"Advanced Settings"=="高级设置"
\tAdvanced==高级
"Update Settings"=="升级设置"
\tUpdate==升级
>YaCy Project==>YaCy项目
"YaCy Project Home"=="YaCy项目主页"
\tProject==项目
"YaCy Statistics"=="YaCy数据"
\tStatistics==数据
"YaCy Forum"=="YaCy论坛"
#Forum==Forum
"Help"=="帮助"
#"YaCy Wiki"=="YaCy Wiki"
#Wiki==Wiki
#-----------------------------
#File: yacy/ui/yacyui-bookmarks.html
#---------------------------
'Add'=='添加'
'Crawl'=='crawl'
'Edit'=='编辑'
'Delete'=='删除'
'Rename'=='重命名'
'Help'=='帮助'
#"public bookmark"=="öffentliches Lesezeichen"
#"private bookmark"=="privates Lesezeichen"
#"delete bookmark"=="Lesezeichen löschen"
"YaCy Bookmarks"=="YaCy书签"
#>Title==>Titel
#>Tags==>Tags
#>Date==>Datum
#'Hash'=='Hash'
'Public'=='公有'
'Title'=='题目'
#'Tags'=='Tags'
'Folders'=='目录'
'Date'=='日期'
#-----------------------------
#File: yacy/ui/sidebar/sidebar_1.html
#---------------------------
YaCy P2P Websearch==YaCy P2P搜索
"Search"=="搜索"
>Text==>文本
>Images==>图像
>Audio==>音频
>Video==>视频
>Applications==>程序
Search term:==搜索条目:
"help"=="帮助"
Resource/Network:==资源/网络:
freeworld==自由世界
local peer==本地peer
>bookmarks==>书签
sciencenet==ScienceNet
>Language:==>语言:
any language==任意语言
Bookmark Folders==书签目录
#-----------------------------
#File: yacy/ui/sidebar/sidebar_2.html
#---------------------------
Bookmark Tags<==标签<
Search Options==搜索设置
Constraint:==约束:
all pages==所有页面
index pages==索引页面
URL mask:==URL过滤:
Prefer mask:==首选过滤:
Bookmark TagCloud==标签云
Topwords<==热门词<
alt="help"==alt="帮助"
title="help"==title="帮助"
#-----------------------------
#File: yacy/ui/yacyui-welcome.html
#---------------------------
>Overview==>概述
YaCy-UI is going to be a JavaScript based client for YaCy based on the existing XML and JSON API.==YaCy-UI 是基于JavaScript的YaCy客户端, 它使用当前的XML和JSON API.
YaCy-UI is at most alpha status, as there is still problems with retriving the search results.==YaCy-UI 目前最多算是alpha状态, 获取搜索结果时仍有问题.
I am currently changing the backend to a more application friendly format and getting good results with it \(I will check that in some time after the stable release 0.7\).==目前我正在将后台修改为对应用更友好的格式, 并已获得不错的效果(我会在稳定版0.7发布后的某个时间提交).
For now have a look at the bookmarks, performance has increased significantly, due to the use of JSON and Flexigrid!==就目前来说, 先看看书签功能吧: 由于使用了JSON和Flexigrid, 性能已显著提升!
#-----------------------------
# EOF