# zh.lng
# English-->Chinese
# -----------------------
# This is a part of YaCy, a peer-to-peer based web search engine
#
# (C) by Michael Peter Christen; mc@anomic.de
# first published on http://www.anomic.de
# Frankfurt, Germany, 2005
#
#
# This file is maintained by lofyer
# This file is written by lofyer
# If you find any mistakes or untranslated strings in this file please don't hesitate to email them to the maintainer.
#File: AccessGrid_p.html
#---------------------------
YaCy Network Access==YaCy网络访问
Server Access Grid==服务器访问网格
This images shows incoming connections to your YaCy peer and outgoing connections from your peer to other peers and web servers==这幅图显示了到您节点的传入连接,以及从您节点到其他节点或网站服务器的传出连接
#-----------------------------
#File: AccessTracker_p.html
#---------------------------
Access Tracker==访问跟踪器
Server Access Overview==服务器访问概况
This is a list of #[num]# requests to the local http server within the last hour.==最近一小时内有 #[num]# 个到本地http服务器的访问请求.
This is a list of requests to the local http server within the last hour==此列表显示最近一小时内到本地http服务器的访问请求
Showing #[num]# requests==显示 #[num]# 个请求
>Host<==>主机<
>Path<==>路径<
Date<==日期<
Access Count During==访问计数
last Second==最近1秒
last Minute==最近1分钟
last 10 Minutes==最近10分钟
last Hour==最近1小时
The following hosts are registered as source for brute-force requests to protected pages==以下主机被登记为对受保护页面进行暴力请求的来源
#>Host==>Host
Access Times==访问时间
Server Access Details==服务器访问细节
Local Search Log==本地搜索日志
Local Search Host Tracker==本地搜索主机跟踪器
Remote Search Log==远端搜索日志
#Total:==Total:
Success:==成功:
Remote Search Host Tracker==远端搜索主机跟踪器
This is a list of searches that had been requested from this' peer search interface==此列表显示从本节点搜索界面发出请求的搜索
Showing #[num]# entries from a total of #[total]# requests.==显示 #[num]# 个条目,共 #[total]# 个请求.
Requesting Host==请求主机
Peer Name==节点名称
Offset==偏移量
Expected Results==期望结果
Returned Results==返回结果
Known Results==已知结果
Used Time (ms)==消耗时间(毫秒)
URL fetch (ms)==地址获取(毫秒)
Snippet comp (ms)==摘要比较(毫秒)
Query==查询词
>User Agent<==>用户代理<
Top Search Words (last 7 Days)==热门搜索词汇(最近7天)
Search Word Hashes==搜索词哈希值
Count==计数
Queries Per Last Hour==最近一小时查询数
Access Dates==访问日期
This is a list of searches that had been requested from remote peer search interface==此列表显示从远端节点搜索界面发出请求的搜索
This is a list of requests (max. 1000) to the local http server within the last hour==这是最近一小时内本地http服务器的请求列表(最多1000个)
#-----------------------------
#File: Settings_UrlProxyAccess.inc
#---------------------------
URL Proxy Settings<==地址代理设置<
With this settings you can activate or deactivate URL proxy.==通过此设置您可以激活或停用地址代理.
Service call: ==服务调用: 
, where parameter is the url of an external web page.==, 其中参数是外部网页的地址.
>URL proxy:<==>地址代理:<
>Enabled<==>开启<
Globally enables or disables URL proxy via ==全局启用或禁用地址代理, 通过 
Show search results via URL proxy:==通过地址代理显示搜索结果:
Enables or disables URL proxy for all search results. If enabled, all search results will be tunneled through URL proxy.==为所有搜索结果启用或禁用地址代理. 如果启用, 所有搜索结果都将通过地址代理传输.
Alternatively you may add this javascript to your browser favorites/short-cuts, which will reload the current browser address==或者您可以将此javascript添加到浏览器收藏夹/快捷方式中, 它会重新载入当前浏览器地址
via the YaCy proxy servlet.==通过YaCy代理servlet.
or right-click this link and add to favorites:==或右键点击此链接并添加到收藏夹:
Restrict URL proxy use:==限制地址代理使用:
Define client filter. Default: ==定义客户端过滤器. 默认: 
URL substitution:==地址替换:
Define URL substitution rules which allow navigating in proxy environment. Possible values: all, domainlist. Default: domainlist.==定义允许在代理环境中导航的地址替换规则. 可能的值: all, domainlist. 默认: domainlist.
"Submit"=="提交"
#-----------------------------
#File: Autocrawl_p.html
#---------------------------
>Autocrawler<==>自动爬虫<
Autocrawler automatically selects and adds tasks to the local crawl queue==自动爬虫自动选择任务并将其添加到本地爬取队列
This will work best when there are already quite a few domains in the index==当索引中已经有相当多的域名时,此功能工作得最好
Autocralwer Configuration==自动爬虫配置
You need to restart for some settings to be applied==您需要重新启动才能应用某些设置
Enable Autocrawler:==启用自动爬虫:
Deep crawl every:==深度爬取间隔:
Warning: if this is bigger than "Rows to fetch" only shallow crawls will run==警告:如果此值大于"一次取回行数",则只会运行浅层爬取
Rows to fetch at once:==一次取回行数:
Recrawl only older than # days:==只重新爬取 # 天以前的页面:
Get hosts by query:==通过查询获取主机:
Can be any valid Solr query.==可以是任何有效的Solr查询.
Shallow crawl depth (0 to 2):==浅层爬取深度(0至2):
Deep crawl depth (1 to 5):==深度爬取深度(1至5):
Index text:==索引文本:
Index media:==索引媒体:
"Save"=="保存"
#-----------------------------
#File: BlacklistCleaner_p.html
#---------------------------
Blacklist Cleaner==黑名单整理
Here you can remove or edit illegal or double blacklist-entries==在这里您可以删除或编辑非法或重复的黑名单条目
Check list==检查名单
"Check"=="检查"
Allow regular expressions in host part of blacklist entries==允许在黑名单条目的主机部分使用正则表达式
The blacklist-cleaner only works for the following blacklist-engines up to now:==此整理工具目前只对以下黑名单引擎有效:
Illegal Entries in #[blList]# for==非法条目在 #[blList]# 中
Deleted #[delCount]# entries==已删除 #[delCount]# 个条目
Altered #[alterCount]# entries==已修改 #[alterCount]# 个条目
Two wildcards in host-part==主机部分中有两个通配符
Either subdomain or wildcard==子域名或通配符
Path is invalid Regex==路径是无效的正则表达式
Wildcard not on begin or end==通配符不在开头或结尾处
Host contains illegal chars==主机名包含非法字符
Double==重复
"Change Selected"=="改变选中"
"Delete Selected"=="删除选中"
No Blacklist selected==未选中黑名单
#-----------------------------
#File: BlacklistImpExp_p.html
#---------------------------
Blacklist Import==黑名单导入
Used Blacklist engine:==使用的黑名单引擎:
Import blacklist items from...==导入黑名单条目,来自...
other YaCy peers:==其他YaCy节点:
"Load new blacklist items"=="载入新黑名单条目"
#URL:==URL:
plain text file:<==纯文本文件:<
XML file:==XML文件:
Upload a regular text file which contains one blacklist entry per line.==上传一个每行包含一个黑名单条目的纯文本文件.
Upload an XML file which contains one or more blacklists.==上传一个包含一个或多个黑名单的XML文件.
Export blacklist items to==导出黑名单条目到
Here you can export a blacklist as an XML file. This file will contain additional==在这里您可以将黑名单导出为XML文件. 此文件将包含
information about which cases a blacklist is activated for==黑名单在何种情况下被激活的附加信息
"Export list as XML"=="导出名单为XML"
Here you can export a blacklist as a regular text file with one blacklist entry per line==在这里您可以将黑名单导出为每行一个条目的纯文本文件
This file will not contain any additional information==此文件不会包含任何附加信息
"Export list as text"=="导出名单为文本"
#-----------------------------
#File: BlacklistTest_p.html
#---------------------------
Blacklist Test==黑名单测试
Used Blacklist engine:==使用的黑名单引擎:
Test list:==测试名单:
"Test"=="测试"
The tested URL was==被测试的地址是
It is blocked for the following cases:==在下列情况下,它会被阻止:
Crawling==爬取
#DHT==DHT
News==新闻
Proxy==代理
Search==搜索
Surftips==建议
#-----------------------------
#File: Blacklist_p.html
#---------------------------
Blacklist Administration==黑名单管理
This function provides an URL filter to the proxy; any blacklisted URL is blocked==此功能为代理提供地址过滤器; 任何列入黑名单的地址都会被阻止
from being loaded. You can define several blacklists and activate them separately.==载入. 您可以定义多个黑名单并分别激活它们.
You may also provide your blacklist to other peers by sharing them; in return you may==您也可以通过共享将您的黑名单提供给其他节点; 反过来您也可以
collect blacklist entries from other peers==收集其他节点的黑名单条目
Select list to edit:==选择要编辑的名单:
Add URL pattern==添加地址规则
Edit list==编辑名单
The right '*', after the '/', can be replaced by a==在'/'之后右边的'*'可以被替换为
>regular expression<==>正则表达式<
#(slow)==(慢)
"set"=="设置"
The right '*'==右边的'*'
Used Blacklist engine:==使用的黑名单引擎:
Active list:==激活的名单:
No blacklist selected==未选中黑名单
Select list:==选择名单:
not shared::shared==未共享::已共享
"select"=="选择"
Create new list:==创建新名单:
"create"=="创建"
Settings for this list==此名单的设置
"Save"=="保存"
Share/don't share this list==共享/不共享此名单
Delete this list==删除此名单
Edit this list==编辑此名单
These are the domain name/path patterns in==这些是域名/路径规则,来自
Blacklist Pattern==黑名单规则
Edit selected pattern(s)==编辑选中规则
Delete selected pattern(s)==删除选中规则
Move selected pattern(s) to==移动选中规则到
#You can select them here for deletion==您可以从这里选择要删除的项
Add new pattern:==添加新规则:
"Add URL pattern"=="添加地址规则"
The right '*', after the '/', can be replaced by a regular expression.==在'/'之后右边的'*'可以用正则表达式表示.
#domain.net/fullpath<==domain.net/绝对路径<
#>domain.net/*<==>domain.net/*<
#*.domain.net/*<==*.domain.net/*<
#*.sub.domain.net/*<==*.sub.domain.net/*<
#sub.domain.*/*<==sub.domain.*/*<
#domain.*/*<==domain.*/*<
#was removed from blacklist==已从黑名单中移除
#was added to the blacklist==已添加到黑名单
Activate this list for==为以下项激活此名单
Show entries:==显示条目:
Entries per page:==每页条目数:
Edit existing pattern(s):==编辑现有规则:
"Save URL pattern(s)"=="保存地址规则"
#-----------------------------
#File: Blog.html
#---------------------------
by==作者
Comments==评论
>edit==>编辑
>delete==>删除
Edit<==编辑<
previous entries==前一页条目
next entries==后一页条目
new entry==新条目
import XML-File==导入XML文件
export as XML==导出为XML文件
Comments==评论
Blog-Home==博客主页
Author:==作者:
Subject:==标题:
Text:==正文:
You can use==您可以使用
Yacy-Wiki Code==YaCy-Wiki代码
here.==在这里.
Comments:==评论:
deactivated==已停用
>activated==>已激活
moderated==受审核
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
>Preview==>预览
No changes have been submitted so far==到目前为止未提交任何更改
Access denied==拒绝访问
To edit or create blog-entries you need to be logged in as Admin or User who has Blog rights.==要编辑或创建博客条目,您需要以管理员或拥有博客权限的用户身份登录.
Are you sure==您确定
that you want to delete==要删除:
Confirm deletion==确认删除
Yes, delete it.==是,删除.
No, leave it.==否,保留.
Import was successful!==导入成功!
Import failed, maybe the supplied file was no valid blog-backup?==导入失败,提供的文件可能不是有效的博客备份?
Please select the XML-file you want to import:==请选择您想导入的XML文件:
#-----------------------------
#File: BlogComments.html
#---------------------------
by==作者
Comments==评论
Login==登录
Blog-Home==博客主页
delete==删除
allow==允许
Author:==作者:
Subject:==标题:
#Text:==Text:
You can use==您可以使用
Yacy-Wiki Code==YaCy-Wiki代码
here.==在这里.
"Submit"=="提交"
"Preview"=="预览"
"Discard"=="取消"
#-----------------------------
#File: Bookmarks.html
#---------------------------
start autosearch of new bookmarks==开始自动搜索新书签
This starts a search of new or modified bookmarks since startup==开始搜索自启动以来新建或修改的书签
Every peer online will be ask for results.==每个在线的节点都会被询问结果.
To see a list of all APIs, please visit the API wiki page.==要查看所有API的列表,请访问API wiki页面.
To see a list of all APIs==要查看所有API的列表
YaCy '#[clientname]#': Bookmarks==YaCy '#[clientname]#': 书签
The bookmarks list can also be retrieved as RSS feed. This can also be done when you select a specific tag.==书签列表也能作为RSS订阅获取. 当您选择某个特定标签时也可执行此操作.
Click the API icon to load the RSS from the current selection.==点击API图标以从当前选择中载入RSS.
Bookmarks==书签
Bookmarks (==书签(
Login==登录
List Bookmarks==列出书签
Add Bookmark==添加书签
Import Bookmarks==导入书签
Import XML Bookmarks==导入XML书签
Import HTML Bookmarks==导入HTML书签
"import"=="导入"
Default Tags:==默认标签:
imported==已导入
Edit Bookmark==编辑书签
#URL:==URL:
Title:==标题:
Description:==描述:
Folder (/folder/subfolder):==目录(/目录/子目录):
Tags (comma separated):==标签(以逗号隔开):
>Public:==>公开:
yes==是
no==否
Bookmark is a newsfeed==书签是新闻订阅源
"create"=="创建"
"edit"=="编辑"
File:==文件:
import as Public==导入为公开
"private bookmark"=="私有书签"
"public bookmark"=="公开书签"
Tagged with==标签为
'Confirm deletion'=='确认删除'
Edit==编辑
Delete==删除
Folders==目录
Bookmark Folder==书签目录
Tags==标签
Bookmark List==书签列表
previous page==上一页
next page==下一页
All==所有
Show==显示
Bookmarks per page==每页书签数
#unsorted==未排序
#-----------------------------
#File: Collage.html
#---------------------------
Image Collage==图像拼贴
Private Queue==私有队列
Public Queue==公共队列
#-----------------------------
#File: ConfigAccounts_p.html
#---------------------------
Username too short. Username must be >= 4 Characters.==用户名太短. 用户名必须>=4个字符.
Username already used (not allowed).==用户名已被使用(不允许).
Username too short. Username must be ==用户名太短. 用户名必须
User Administration==用户管理
User created:==用户已创建:
User changed:==用户已更改:
Generic error==一般错误
Passwords do not match==密码不匹配
Username too short. Username must be >= 4 Characters==用户名太短, 至少为4个字符
No password is set for the administration account==管理员账户未设置密码
Please define a password for the admin account==请为管理员账户设置密码
#Admin Account
Admin Account==管理员账户
Access from localhost without account==本地无账户访问
Access to your peer from your own computer (localhost access) is granted with administrator rights. No need to configure an administration account.==从您自己的计算机访问您的节点(localhost访问)将被授予管理员权限. 无需配置管理员账户.
This setting is convenient but less secure than using a qualified admin account.==此设置很方便,但比使用正规的管理员账户安全性低.
Please use with care, notably when you browse untrusted and potentially malicious websites while running your YaCy peer on the same computer.==请谨慎使用,尤其是当您在同一台计算机上运行YaCy节点并浏览不受信任和可能有恶意的网站时.
Access only with qualified account==只允许授权账户访问
This is required if you want a remote access to your peer, but it also hardens access controls on administration operations of your peer.==如果您希望远端访问您的节点,则这是必需的,但它也会加强对节点管理操作的访问控制.
Peer User:==节点用户:
New Peer Password:==新节点密码:
Repeat Peer Password:==重复节点密码:
"Define Administrator"=="设置管理员"
#Access Rules
>Access Rules<==>访问规则<
Protection of all pages: if set to on==保护所有页面:如果设置为开启
access to all pages need authorization==访问所有页面都需要授权
if off, only pages with "_p" extension are protected==如果关闭,只有带"_p"扩展名的页面才受保护
Set Access Rules==设置访问规则
#User Accounts
User Accounts==用户账户
Select user==选择用户
New user==新用户
or goto user==或者转到用户
>account list<==>账户列表<
Edit User==编辑用户
Delete User==删除用户
Edit current user:==编辑当前用户:
Username==用户名
Password==密码
Repeat password==重复密码
First name==名
Last name==姓
Address==地址
Rights==权限
==
Timelimit==时限
Time used==已用时间
Save User==保存用户
#-----------------------------
#File: ConfigAppearance_p.html
#---------------------------
Appearance and Integration==外观与整合
You can change the appearance of the YaCy interface with skins.==您可以用皮肤改变YaCy界面的外观.
#You can change the appearance of YaCy with skins==Sie können hier das Erscheinungsbild von YaCy mit Skins ändern
The selected skin and language also affects the appearance of the search page.==选择的皮肤和语言也会影响搜索页面的外观.
If you create a search portal with YaCy then you can==如果您用YaCy创建搜索门户,那么您可以
change the appearance of the search page here.==在这里改变搜索页面的外观.
#and the default icons and links on the search page can be replaced with you own.==und die standard Grafiken und Links auf der Suchseite durch Ihre eigenen ersetzen.
Skin Selection==皮肤选择
Select one of the default skins, download new skins, or create your own skin.==选择一个默认皮肤,下载新皮肤,或者创建您自己的皮肤.
Current skin==当前皮肤
Available Skins==可用皮肤
"Use"=="使用"
"Delete"=="删除"
>Skin Color Definition<==>皮肤颜色定义<
The generic skin 'generic_pd' can be configured here with custom colors:==通用皮肤'generic_pd'可以在这里配置自定义颜色:
>Background<==>背景<
>Text<==>文本<
>Legend<==>图例<
>Table Header<==>表格头部<
>Table Item<==>表格条目<
>Table Item 2<==>表格条目2<
>Table Bottom<==>表格底部<
>Border Line<==>边框线<
>Sign 'bad'<==>符号'坏'<
>Sign 'good'<==>符号'好'<
>Sign 'other'<==>符号'其他'<
>Search Headline<==>搜索标题<
>Search URL==>搜索地址
hover==悬停
"Set Colors"=="设置颜色"
>Skin Download<==>皮肤下载<
Skins can be installed from download locations==皮肤可以从下载位置安装
Install new skin from URL==从URL安装新皮肤
Use this skin==使用此皮肤
"Install"=="安装"
Make sure that you only download data from trustworthy sources. The new Skin file==请确保您只从可靠来源下载数据. 如果已存在同名文件,
might overwrite existing data if a file of the same name exists already.==新的皮肤文件可能会覆盖现有数据.
>Unable to get URL:==>无法获取URL:
Error saving the skin.==保存皮肤时出错.
#-----------------------------
#File: ConfigBasic.html
#---------------------------
Your port has changed. Please wait 10 seconds.==您的端口已更改. 请等待10秒.
Your browser will be redirected to the new location in 5 seconds.==您的浏览器将在5秒内重定向到新位置.
The peer port was changed successfully.==节点端口已成功更改.
Opening a router port is not a YaCy-specific task;==打开路由器端口不是YaCy特有的任务;
However: if you fail to open a router port, you can nevertheless use YaCy with full functionality, the only function that is missing is on the side of the other YaCy users because they cannot see your peer.==但是:如果您无法打开路由器端口,您仍然可以使用YaCy的全部功能,唯一缺少的功能是在其他YaCy用户侧,因为他们无法看到您的节点.
Set by system property==由系统属性设置
https enabled==https已启用
Configure your router for YaCy using UPnP:==使用UPnP为YaCy配置您的路由器:
on port==在端口
you can see instruction videos everywhere in the internet, just search for Open Ports on a <our-router-type> Router and add your router type as search term.==您可以在互联网上随处找到说明视频,只需搜索Open Ports on a <our-router-type> Router并添加您的路由器类型作为搜索词.
However: if you fail to open a router port==但是:如果您无法打开路由器端口
you can see instruction videos everywhere in the internet==您可以在互联网上随处找到说明视频
Access Configuration==访问配置
Basic Configuration==基本配置
Your YaCy Peer needs some basic information to operate properly==您的YaCy节点需要一些基本信息才能正常工作
Select a language for the interface==选择界面语言
汉语/漢語==中文
Use Case: what do you want to do with YaCy:==用途:您想用YaCy做什么:
Community-based web search==基于社区的网络搜索
Join and support the global network 'freeworld', search the web with an uncensored user-owned search network==加入并支持全球网络'freeworld',用一个未经审查的用户自有搜索网络搜索网页
Search portal for your own web pages==您自己网页的搜索门户
Your YaCy installation behaves independently from other peers and you define your own web index by starting your own web crawl. This can be used to search your own web pages or to define a topic-oriented search portal.==您的YaCy安装独立于其他节点运行,您可以通过启动自己的网络爬取来定义自己的网络索引. 这可用于搜索您自己的网页,或定义面向特定主题的搜索门户.
Files may also be shared with the YaCy server, assign a path here:==文件也可以与YaCy服务器共享,在这里指定路径:
This path can be accessed at ==可以通过以下地址访问此路径 
Use that path as crawl start point.==将此路径作为爬取起点.
Intranet Indexing==内网索引
Create a search portal for your intranet or web pages or your (shared) file system.==为您的内网、网页或(共享)文件系统创建搜索门户.
URLs may be used with http/https/ftp and a local domain name or IP, or with an URL of the form==可以使用http/https/ftp协议加本地域名或IP的地址,或如下形式的地址
or smb:==或smb:
Your peer name has not been customized; please set your own peer name==您的节点名称尚未自定义;请设置您自己的节点名称
You may change your peer name==您可以更改您的节点名称
Peer Name:==节点名称:
Your peer cannot be reached from outside==无法从外部访问您的节点
which is not fatal, but would be good for the YaCy network==这不是致命问题,但开放访问对YaCy网络有好处
please open your firewall for this port and/or set a virtual server option in your router to allow connections on this port==请在防火墙中打开此端口,和/或在路由器中设置虚拟服务器选项,以允许此端口上的连接
Your peer can be reached by other peers==其他节点可以访问您的节点
Peer Port:==节点端口:
Configure your router for YaCy:==为YaCy配置您的路由器:
Configuration was not successful. This may take a moment.==配置未成功. 这可能需要一些时间.
Set Configuration==保存配置
What you should do next:==下一步您该做的:
Your basic configuration is complete! You can now (for example)==您的基本配置已完成!您现在可以(例如)
just <==就<
start an uncensored search==开始自由搜索
start your own crawl and contribute to the global index, or create your own private web index==开始您自己的爬取并贡献给全球索引,或创建您自己的私有网络索引
set a personal peer profile (optional settings)==设置个人节点资料(可选设置)
monitor at the network page what the other peers are doing==在网络页面监视其他节点的活动
Your Peer name is a default name; please set an individual peer name.==您的节点名称是默认名称,请另外设置一个名称.
You did not set a user name and/or a password.==您未设置用户名和/或密码.
Some pages are protected by passwords.==一些页面受密码保护.
You should set a password at the Accounts Menu to secure your YaCy peer.::==您可以在账户菜单设置密码,从而加强您的YaCy节点安全性.::
You did not open a port in your firewall or your router does not forward the server port to your peer.==您未在防火墙中打开端口,或者您的路由器未将服务器端口转发到您的节点.
This is needed if you want to fully participate in the YaCy network.==如果您要完全加入YaCy网络,此项是必须的.
You can also use your peer without opening it, but this is not recomended.==不开放端口您也能使用您的节点,但不推荐这样做.
#-----------------------------
#File: ConfigHTCache_p.html
#---------------------------
Hypertext Cache Configuration==超文本缓存配置
The HTCache stores content retrieved by the HTTP and FTP protocol. Documents from smb:// and file:// locations are not cached.==超文本缓存存储通过HTTP和FTP协议获取的内容. 来自smb://和file://位置的文档不会被缓存.
The cache is a rotating cache: if it is full, then the oldest entries are deleted and new one can fill the space.==此缓存是轮转缓存:缓存满时,最旧的条目会被删除,以便新条目填充空间.
#HTCache Configuration
HTCache Configuration==超文本缓存配置
Cache hits==缓存命中
The path where the cache is stored==缓存存储路径
The current size of the cache==当前缓存大小
>#[actualCacheSize]# MB for #[actualCacheDocCount]# files, #[docSizeAverage]# KB / file in average==>#[actualCacheSize]# MB,共 #[actualCacheDocCount]# 个文件,平均每个文件 #[docSizeAverage]# KB
The maximum size of the cache==缓存最大容量
Compression level==压缩级别
Concurrent access timeout==并发访问超时
milliseconds==毫秒
"Set"=="设置"
#Cleanup
Cleanup==清理
Cache Deletion==缓存删除
Delete HTTP & FTP Cache==删除HTTP & FTP缓存
Delete robots.txt Cache==删除robots.txt缓存
"Delete"=="删除"
#-----------------------------
#File: ConfigHeuristics_p.html
#---------------------------
Heuristics Configuration==启发式配置
A heuristic is an 'experience-based technique that help in problem solving, learning and discovery' (wikipedia).==启发式是一种'基于经验的,有助于解决问题、学习和发现的技术'(维基百科).
search-result: shallow crawl on all displayed search results==搜索结果:对所有显示的搜索结果进行浅层爬取
When a search is made then all displayed result links are crawled with a depth-1 crawl.==当进行搜索时,所有显示的结果链接都将以深度1进行爬取.
"Save"=="保存"
"add"=="添加"
>new<==>新建<
>delete<==>删除<
>Comment<==>评论<
>Title<==>标题<
>Active<==>激活<
>copy & paste a example config file<==>复制&粘贴一个示例配置文件<
Alternatively you may==或者您可以
To find out more about OpenSearch see==要了解关于OpenSearch的更多信息,请参阅
20 results are taken from remote system and loaded simultanously, parsed and indexed immediately.==从远端系统获取20个结果并同时载入,立即解析并建立索引.
When using this heuristic, then every new search request line is used for a call to listed opensearch systems.==使用此启发式时,每个新的搜索请求行都用于调用列出的opensearch系统.
This means: right after the search request every page is loaded and every page that is linked on this page.==这意味着:在搜索请求之后,会立即载入每个页面以及该页面上链接的每个页面.
If you check 'add as global crawl job' the pages to be crawled are added to the global crawl queue (remote peers can pickup pages to be crawled).==如果您勾选'添加为全球爬取作业',要爬取的页面将被添加到全球爬取队列(远端节点可以认领要爬取的页面).
Default is to add the links to the local crawl queue (your peer crawls the linked pages).==默认是将链接添加到本地爬取队列(您的节点爬取链接的页面).
add as global crawl job==添加为全球爬取作业
opensearch load external search result list from active systems below==opensearch从下面的激活系统载入外部搜索结果列表
Available/Active Opensearch System==可用/激活的Opensearch系统
Url (format opensearch==Url (格式为opensearch
Url template syntax==Url模板语法
"reset to default list"=="重置为默认列表"
"discover from index"=="从索引中发现"
start background task, depending on index size this may run a long time==开始后台任务,取决于索引大小,这可能会运行很长时间
With the button "discover from index" you can search within the metadata of your local index (Web Structure Index) to find systems which support the Opensearch specification.==使用"从索引中发现"按钮,您可以在本地索引(网络结构索引)的元数据中搜索,以查找支持Opensearch规范的系统.
The task is started in the background. It may take some minutes before new entries appear (after refreshing the page).==任务在后台启动. 出现新条目可能需要几分钟时间(在刷新页面之后).
"switch Solr fields on"=="开启Solr字段"
('modify Solr Schema')==('修改Solr模式')
located in defaults/heuristicopensearch.conf to the DATA/SETTINGS directory.==位于defaults/heuristicopensearch.conf,复制到DATA/SETTINGS目录.
For the discover function the web graph option of the web structure index and the fields target_rel_s, target_protocol_s, target_urlstub_s have to be switched on in the webgraph Solr schema.==对于发现功能,必须在webgraph Solr模式中开启网络结构索引的网络图选项以及字段target_rel_s, target_protocol_s, target_urlstub_s.
20 results are taken from remote system and loaded simultanously==从远端系统获取20个结果并同时载入
>copy ==>复制
When using this heuristic==使用此启发式时
For the discover function the web graph option of the web structure index and the fields target_rel_s==对于发现功能,必须在webgraph Solr模式中开启网络结构索引的网络图选项以及字段target_rel_s
start background task==开始后台任务
>copy==>复制
The search heuristics that can be switched on here are techniques that help the discovery of possible search results based on link guessing, in-search crawling and requests to other search engines.==此处可开启的搜索启发式是一些技术,它们基于链接猜测、搜索中爬取和对其他搜索引擎的请求,帮助发现可能的搜索结果.
When a search heuristic is used, the resulting links are not used directly as search result but the loaded pages are indexed and stored like other content.==使用搜索启发式时,得到的链接不会直接用作搜索结果,而是像其他内容一样对载入的页面建立索引并存储.
This ensures that blacklists can be used and that the searched word actually appears on the page that was discovered by the heuristic.==这确保了黑名单可以发挥作用,并且搜索词确实出现在启发式发现的页面上.
The success of heuristics are marked with an image==启发式的成果会用图标标记
heuristic:<name>==启发式:<名称>
#(redundant)==(redundant)
(new link)==(新链接)
below the favicon left from the search result entry:==位于搜索结果条目左侧的网站图标下方:
The search result was discovered by a heuristic, but the link was already known by YaCy==搜索结果通过启发式发现,但链接已被YaCy知晓
The search result was discovered by a heuristic, not previously known by YaCy==搜索结果通过启发式发现,此前未被YaCy知晓
'site'-operator: instant shallow crawl=='site'-操作符:即时浅层爬取
When a search is made using a 'site'-operator (like: 'download site:yacy.net') then the host of the site-operator is instantly crawled with a host-restricted depth-1 crawl.==当使用'site'-操作符搜索时(如:'download site:yacy.net'),site-操作符指定的主机会立即以限定主机的深度1爬取方式被爬取.
That means: right after the search request the portal page of the host is loaded and every page that is linked on this page that points to a page on the same host.==这意味着:在搜索请求之后,会立即载入该主机的门户页面,以及该页面上链接到同一主机的每个页面.
Because this 'instant crawl' must obey the robots.txt and a minimum access time for two consecutive pages, this heuristic is rather slow, but may discover all wanted search results using a second search (after a small pause of some seconds).==因为'即时爬取'必须遵守robots.txt以及连续两个页面之间的最小访问间隔,所以这个启发式相当慢,但在第二次搜索时(稍等几秒后)可能发现所有想要的搜索结果.
#-----------------------------
#File: ConfigLanguage_p.html
#---------------------------
Simple Editor==简单编辑器
Download Language File==下载语言文件
to add untranslated text==以添加未翻译的文本
Supported formats are the internal language file (extension .lng) or XLIFF (extension .xlf) format.==支持的格式是内部语言文件(扩展名.lng)或XLIFF(扩展名.xlf)格式.
Language selection==语言选择
You can change the language of the YaCy-webinterface with translation files.==您可以使用翻译文件改变YaCy网页界面的语言.
Current language==当前语言
Author(s) (chronological)==作者(按时间排序)
Send additions to maintainer==向维护者发送增补
Available Languages==可用语言
Install new language from URL==从URL安装新语言
Use this language==使用此语言
"Use"=="使用"
"Delete"=="删除"
"Install"=="安装"
Unable to get URL:==无法获取URL:
Error saving the language file.==保存语言文件时出错.
Make sure that you only download data from trustworthy sources. The new language file==请确保您只从可靠来源下载数据. 如果已存在同名文件,
might overwrite existing data if a file of the same name exists already.==新的语言文件可能会覆盖现有数据.
#-----------------------------
#File: ConfigNetwork_p.html
#---------------------------
==
Network Configuration==网络配置
#Network and Domain Specification
Network and Domain Specification==网络和域规范
YaCy can operate a computing grid of YaCy peers or as a stand-alone node.==YaCy可以作为YaCy节点计算网格的一部分运行,也可以作为独立节点运行.
To control that all participants within a web indexing domain have access to the same domain,==为了确保网络索引域内的所有参与者都能访问同一个域,
this network definition must be equal to all members of the same YaCy network.==此网络定义必须对同一YaCy网络的所有成员相同.
>Network Definition<==>网络定义<
Enter custom URL...==输入自定义网址...
Remote Network Definition URL==远端网络定义地址
Network Nick==网络昵称
Long Description==详细描述
Indexing Domain==索引域
#DHT==DHT
"Change Network"=="改变网络"
#Distributed Computing Network for Domain
Distributed Computing Network for Domain==域的分布式计算网络
Enable Peer-to-Peer Mode to participate in the global YaCy network==开启点对点模式以加入全球YaCy网络
or if you want your own separate search cluster with or without connection to the global network==或者如果您想要自己独立的搜索集群(无论是否连接到全球网络)
Enable 'Robinson Mode' for a completely independent search engine instance==开启'漂流模式'以获得完全独立的搜索引擎实例
without any data exchange between your peer and other peers==您的节点不会与其他节点有任何数据交换
#Peer-to-Peer Mode
Peer-to-Peer Mode==点对点模式
>Index Distribution==>索引分发
This enables automated, DHT-ruled Index Transmission to other peers==开启自动的、遵循DHT规则的到其他节点的索引传输
>enabled==>开启
disabled during crawling==爬取时关闭
disabled during indexing==索引时关闭
>Index Receive==>索引接收
Accept remote Index Transmissions==接受远端索引传输
This works only if you have a senior peer. The DHT-rules do not work without this function==仅当您是高级节点时有效. 没有此功能,DHT规则无法工作
>reject==>拒绝
accept transmitted URLs that match your blacklist==接受与您黑名单匹配的传来地址
>allow==>允许
deny remote search==拒绝远端搜索
#Robinson Mode
>Robinson Mode==>漂流模式
If your peer runs in 'Robinson Mode' you run YaCy as a search engine for your own search portal without data exchange to other peers==如果您的节点运行在'漂流模式',YaCy将作为您自己搜索门户的搜索引擎运行,不与其他节点交换数据
There is no index receive and no index distribution between your peer and any other peer==您的节点与任何其他节点之间没有索引接收和索引分发
In case of Robinson-clustering there can be acceptance of remote crawl requests from peers of that cluster==在漂流集群的情况下,可以接受来自该集群节点的远端爬取请求
>Private Peer==>私有节点
Your search engine will not contact any other peer, and will reject every request==您的搜索引擎不会与任何其他节点联系,并会拒绝每个请求
>Public Peer==>公共节点
You are visible to other peers and contact them to distribute your presence==您对其他节点可见,并与它们联系以分发您的存在信息
Your peer does not accept any outside index data, but responds on all remote search requests==您的节点不接受任何外部索引数据,但会响应所有远端搜索请求
>Public Cluster==>公共集群
Your peer is part of a public cluster within the YaCy network==您的节点是YaCy网络内一个公共集群的一部分
Index data is not distributed, but remote crawl requests are distributed and accepted==索引数据不会被分发,但远端爬取请求会被分发和接受
Search requests are spread over all peers of the cluster, and answered from all peers of the cluster==搜索请求在集群的所有节点中传播,并由集群的所有节点回应
List of .yacy or .yacyh - domains of the cluster: (comma-separated)==集群的.yacy或.yacyh域名列表:(以逗号隔开)
>Peer Tags==>节点标签
When you allow access from the YaCy network, your data is recognized using keywords==当您允许来自YaCy网络的访问时,您的数据通过关键词被识别
Please describe your search portal with some keywords (comma-separated)==请用一些关键词描述您的搜索门户(以逗号隔开)
If you leave the field empty, no peer asks your peer. If you fill in a '*', your peer is always asked.==如果此字段留空,则没有节点会询问您的节点. 如果填入'*',您的节点总是会被询问.
"Save"=="保存" #Outgoing communications encryption Outgoing communications encryption==出色的通信加密 Protocol operations encryption==协议操作加密 Prefer HTTPS for outgoing connexions to remote peers==更喜欢以HTTPS作为输出连接到远端节点 When==当 is enabled on remote peers==在远端节点开启时 it should be used to encrypt outgoing communications with them (for operations such as network presence, index transfer, remote crawl==它应该被用来加密与它们的传出通信(操作:网络存在、索引传输、远端爬行 Please note that contrary to strict TLS==请注意,与严格的TLS相反 certificates are not validated against trusted certificate authorities==证书向受信任的证书颁发机构进行验证 thus allowing YaCy peers to use self-signed certificates==从而允许YaCy节点使用自签名证书 Note also that encryption of remote search queries is configured with a dedicated setting in the==另请注意,远端搜索查询加密的专用设置配置请使用 page==页面 No changes were made!==未作出任何改变! Accepted Changes==接受改变 Inapplicable Setting Combination==设置未被应用 #----------------------------- #File: ConfigParser_p.html #--------------------------- Parser Configuration==解析器配置 Content Parser Settings==内容解析器设置 With this settings you can activate or deactivate parsing of additional content-types based on their MIME-types.==此设置能根据文件类型(MIME)开启/关闭额外的内容解析. 
For a detailed description of the various MIME-types take a look at==关于各种MIME类型的详细描述请参考
If you want to test a specific parser you can do so using the==如果要测试特定的解析器,可以使用
>File Viewer<==>文件查看器<
>Extension<==>扩展名<
>Mime-Type<==>Mime类型<
"Submit"=="提交"
PDF Parser Attributes==PDF解析器属性
This is an experimental setting which makes it possible to split PDF documents into individual index entries==这是一个实验性设置,可以将PDF文档拆分为单独的索引条目
Every page will become a single index hit and the url is artifically extended with a post/get attribute value containing the page number as value==每个页面都将成为单个索引命中,并且url会用一个以页码为值的post/get属性进行人为扩展
Split PDF==分割PDF
Property Name==属性名
#-----------------------------
#File: ConfigPortal_p.html
#---------------------------
Integration of a Search Portal==搜索门户整合
If you like to integrate YaCy as portal for your web pages, you may want to change icons and messages on the search page.==如果您想将YaCy整合为您网页的搜索门户,您可能需要改变搜索页面上的图标和信息.
The search page may be customized.==搜索页面可以自定义.
You can change the 'corporate identity'-images, the greeting line==您可以改变'企业标识'图片、问候语
and a link to a home page that is reached when the 'corporate identity'-images are clicked.==以及点击'企业标识'图片时到达的主页链接.
To change also colours and styles use the Appearance Servlet for different skins and languages.==若还要改变颜色和风格,请使用外观选项选择不同的皮肤和语言.
Greeting Line<==问候语<
URL of Home Page<==主页链接<
URL of a Small Corporate Image<==小企业形象图片地址<
URL of a Large Corporate Image<==大企业形象图片地址<
Alternative text for Corporate Images<==企业形象图片的替代文字<
Enable Search for Everyone==对所有人开放搜索
Search is available for everyone==所有人都可以搜索
Only the administator is allowed to search==只有管理员可以搜索
Show Navigation Bar on Search Page==在搜索页面显示导航栏
Show Navigation Top-Menu==显示顶部导航菜单
no link to YaCy Menu (admin must navigate to /Status.html manually)==没有到YaCy菜单的链接(管理员必须手动导航到 /Status.html)
Show Advanced Search Options on Search Page==在搜索页面显示高级搜索选项
Show Advanced Search Options on index.html ==在index.html显示高级搜索选项?
do not show Advanced Search==不显示高级搜索 Media Search==媒体搜索 >Extended==>拓展 >Strict==>严格 Control whether media search results are as default strictly limited to indexed documents matching exactly the desired content domain==控制媒体搜索结果是否默认严格限制为与所需内容域完全匹配的索引文档 (images, videos or applications specific)==(图片,视频或具体应用) or extended to pages including such medias (provide generally more results, but eventually less relevant).==或扩展到包括此类媒体的网页(通常提供更多结果,但相关性更弱). Remote results resorting==远端结果重新排序 >On demand, server-side==>按需,服务器端 Automated, with JavaScript in the browser==自动化,在浏览器中使用JavaScript >for authenticated users only<==>仅限经过身份验证的用户< Remote search encryption==远端搜索加密 Prefer https for search queries on remote peers.==首选https用于远端节点上的搜索查询. When SSL/TLS is enabled on remote peers, https should be used to encrypt data exchanged with them when performing peer-to-peer searches.==在远端节点上启用SSL/TLS时,应使用https来加密在执行P2P搜索时与它们交换的数据. Please note that contrary to strict TLS, certificates are not validated against trusted certificate authorities (CA), thus allowing YaCy peers to use self-signed certificates.==请注意,与严格TLS相反,证书不会针对受信任的证书颁发机构(CA)进行验证,因此允许YaCy节点使用自签名证书. >Snippet Fetch Strategy==>摘要提取策略 Speed up search results with this option! (use CACHEONLY or FALSE to switch off verification)==使用此选项加快搜索结果!(使用CACHEONLY或FALSE关闭验证) NOCACHE: no use of web cache, load all snippets online==NOCACHE:不使用网络缓存,在线加载所有网页摘要 IFFRESH: use the cache if the cache exists and is fresh otherwise load online==IFFRESH:如果缓存存在且为最新则使用缓存,否则在线加载 IFEXIST: use the cache if the cache exist or load online==IFEXIST:如果缓存存在则使用缓存,否则在线加载 If verification fails, delete index reference==如果验证失败,删除索引引用 CACHEONLY: never go online, use all content from cache.==CACHEONLY:永远不上网,内容只来自缓存.
If no cache entry exist, consider content nevertheless as available and show result without snippet==如果不存在缓存条目,将内容视为可用,并显示没有摘要的结果 FALSE: no link verification and not snippet generation: all search results are valid without verification==FALSE:没有链接验证且没有摘要生成:所有搜索结果在没有验证情况下有效 Link Verification<==链接验证< Greedy Learning Mode==贪心学习模式 load documents linked in search results,==加载搜索结果中链接的文档, will be deactivated automatically when index size==将自动停用,当索引大小超过 (see==(见 >Heuristics: search-result<==>启发式:搜索结果< to use this permanent)==以永久使用此功能) Index remote results==索引远端结果 add remote search results to the local index==将远端搜索结果添加到本地索引 ( default=on, it is recommended to enable this option ! )==(默认=开启,建议启用此选项!) Limit size of indexed remote results==限制远端索引结果容量 maximum allowed size in kbytes for each remote search result to be added to the local index==添加到本地索引的每个远端搜索结果的最大允许大小(以KB为单位) for example, a 1000kbytes limit might be useful if you are running YaCy with a low memory setup==例如,如果运行具有低内存设置的YaCy,则1000KB限制可能很有用 Default Pop-Up Page<==默认弹出页面< Default maximum number of results per page==默认每页最大结果数 Default index.html Page (by forwarder)==默认index.html页面(由转发器指定) Target for Click on Search Results==点击搜索结果时 "_blank" (new window)=="_blank" (新窗口) "_self" (same window)=="_self" (同一窗口) "_parent" (the parent frame of a frameset)=="_parent" (父级窗口) "_top" (top of all frames)=="_top" (置顶) Special Target as Exception for an URL-Pattern==作为URL模式的异常的特殊目标 Pattern:<==模式:< Exclude Hosts==排除的主机 List of hosts that shall be excluded from search results by default==默认情况下将被排除在搜索结果之外的主机列表 but can be included using the site: operator==但可以使用site:操作符包括进来 'About' Column<=='关于'栏< shown in a column alongside==显示在 with the search result page==搜索结果页侧栏 (Headline)==(标题) (Content)==(内容) >You have to==>你必须 >set a remote user/password<==>设置一个远端用户/密码< to change this options.<==来改变设置.< Show Information Links for each Search Result Entry==显示搜索结果的链接信息 >Date&==>日期& >Size&==>大小& >Metadata&==>元数据& >Parser&==>解析器& >Pictures==>图片 >Status
Page==>状态页面 >Search Front Page==>搜索首页 >Search Page (small header)==>搜索页面(小标题头) >Interactive Search Page==>交互搜索页面 "searchresult" (a default custom page name for search results)=="searchresult" (搜索结果页面的默认自定义名称) "Change Search Page"=="改变搜索页" "Set to Default Values"=="设为默认值" The search page can be integrated in your own web pages with an iframe. Simply use the following code:==使用以下代码,将搜索页集成在你的网站中: This would look like:==示例: For a search page with a small header, use this code:==对于一个拥有小标题头的搜索页面, 可使用以下代码: A third option is the interactive search. Use this code:==交互搜索代码: #----------------------------- #File: ConfigProfile_p.html #--------------------------- Your Personal Profile==您的个人资料 You can create a personal profile here, which can be seen by other YaCy-members==您可以在这创建个人资料, 而且对其他YaCy节点可见 or in the public using a FOAF RDF file.==或者通过FOAF RDF文件公开显示. >Name<==>名字< Nick Name==昵称 Homepage (appears on every Supporter Page as long as your peer is online)==首页(显示在每个支持者页面中, 前提是您的节点在线). eMail==邮箱 Comment==注释 "Save"=="保存" You can use <==在这里您可以用< > here.==>. #----------------------------- #File: ConfigProperties_p.html #--------------------------- Advanced Config==高级设置 Here are all configuration options from YaCy.==这里显示YaCy所有设置. You can change anything, but some options need a restart, and some options can crash YaCy, if wrong values are used.==您可以改变任何这里的设置, 当然, 有的需要重启才能生效, 有的甚至能引起YaCy崩溃. For explanation please look into defaults/yacy.init==详细内容请参考defaults/yacy.init "Save"=="保存" "Clear"=="清除" #----------------------------- #File: ConfigRobotsTxt_p.html #--------------------------- Exclude Web-Spiders==拒绝网络爬虫 Here you can set up a robots.txt for all webcrawlers that try to access the webinterface of your peer.==在这里您可以创建一个爬虫协议, 以阻止试图访问您节点网络接口的网络爬虫. is a voluntary agreement most search-engines (including YaCy) follow.==是一个大多数搜索引擎(包括YaCy)都遵守的协议. It disallows crawlers to access webpages or even entire domains.==它会阻止网络爬虫进入网页甚至是整个域.
Deny access to==禁止访问 Entire Peer==整个节点 Status page==状态页面 Network pages==网络页面 Surftips==上网技巧 News pages==新闻页面 Blog==博客 Public bookmarks==公共书签 Home Page==首页 File Share==文件共享 Impressum==公司信息 "Save restrictions"=="保存限制" Wiki==维基 #----------------------------- #File: ConfigSearchBox.html #--------------------------- Integration of a Search Box==搜索框设置 We give information how to integrate a search box on any web page that==如何将一个搜索框集成到任意 calls the normal YaCy search window.==调用YaCy搜索的页面. Simply use the following code:==使用以下代码: MySearch==我的搜索 "Search"=="搜索" This would look like:==示例: This does not use a style sheet file to make the integration into another web page with a different style sheet easier.==在这里并没有使用样式文件, 因为这样会比较容易将其嵌入到不同样式的页面里. You would need to change the following items:==您可能需要修改以下条目: Replace the given colors #eeeeee (box background) and #cccccc (box border)==替换已给颜色 #eeeeee (框架背景) 和 #cccccc (框架边框) Replace the word "MySearch" with your own message==用您想显示的信息替换"我的搜索" #----------------------------- #File: ConfigSearchPage_p.html #--------------------------- Search Page<==搜索页< >Search Result Page Layout Configuration<==>搜索结果页面布局配置< Below is a generic template of the search result page. Mark the check boxes for features you would like to be displayed.==以下是搜索结果页面的通用模板.选中您希望显示的功能复选框. To change colors and styles use the ==要改变颜色和样式使用 >Appearance<==>外观< menu for different skins==不同皮肤的菜单 Other portal settings can be adjusted in Generic Search Portal menu.==其他门户网站设置可以在通用搜索门户菜单中调整.
>Page Template<==>页面模板< >Text<==>文本< >Images<==>图片< >Audio<==>音频< >Video<==>视频< >Applications<==>应用< >more options<==>更多选项< >Tag<==>标签< >Topics<==>主题< >Cloud<==>云< >Protocol<==>协议< >Filetype<==>文件类型< >Wiki Name Space<==>百科名称空间< >Language<==>语言< >Author<==>作者< >Vocabulary<==>词汇< >Provider<==>提供商< >Collection<==>集合< >Title of Result<==>结果标题< Description and text snippet of the search result==搜索结果的描述和文本片段 42 kbyte<==42kb< >Metadata<==>元数据< >Parser<==>解析器< >Cache<==>缓存< == "Date"=="日期" "Size"=="大小" "Browse index"=="浏览索引" For this option URL proxy must be enabled==对于这个选项,必须启用URL代理 max. items==最大条目数 "Save Settings"=="保存设置" "Set Default Values"=="设置为默认值" "Top navigation bar"=="顶部导航栏" >Location<==>位置< show search results on map==在地图上显示搜索结果 Date Navigation==日期导航 Maximum range (in days)==最大范围 (按照天算) Maximum days number in the histogram. Beware that a large value may trigger high CPU loads both on the server and on the browser with large result sets.==直方图中的最大天数. 请注意, 较大的值可能会在服务器和具有大结果集的浏览器上触发高CPU负载. keyword subject keyword2 keyword3==关键字 主题 关键字2 关键字3 View via Proxy==通过代理查看 >JPG Snapshot<==>JPG快照< "Raw ranking score value"=="原始排名得分值" Ranking: 1.12195955E9==排名: 1.12195955E9 "Delete navigator"=="删除导航器" Add Navigators==添加导航器 "Add navigator"=="添加导航器" >append==>附加 #----------------------------- #File: ConfigUpdate_p.html #--------------------------- >System Update<==>系统更新< >changelog<==>更新日志< > and <==>和< > RSS feed<==> RSS订阅< (unsigned)==(未签名) (signed)==(签名) add the following line to==将以下行添加到 Manual System Update==系统手动升级 Current installed Release==当前版本 Available Releases==可用版本 "Download Release"=="下载更新" "Check for new Release"=="检查更新" Downloaded Releases==已下载 No downloaded releases available for deployment.==无可用更新. 
no automated installation on development environments==开发环境中不进行自动安装 "Install Release"=="安装更新" "Delete Release"=="删除更新" Automatic Update==自动更新 check for new releases, download if available and restart with downloaded release==检查新版本, 如果可用则下载, 并使用下载的版本重启 "Check + Download + Install Release Now"=="检查 + 下载 + 现在安装" Download of release #[downloadedRelease]# finished. Restart Initiated.==已完成下载 #[downloadedRelease]# . 已启动重启. No more recent release found.==未发现更新的版本. Release will be installed. Please wait.==准备安装更新. 请稍等. You installed YaCy with a package manager.==您是使用包管理器安装的YaCy. To update YaCy, use the package manager:==用包管理器以升级YaCy: Omitting update because this is a development environment.==因当前为开发环境, 忽略安装升级. Omitting update because download of release #[downloadedRelease]# failed.==下载 #[downloadedRelease]# 失败, 忽略安装升级. Automated System Update==系统自动升级 manual update==手动升级 no automatic look-up, updates can be made manually using this interface (see options above)==无自动检查更新时, 可以使用此功能安装更新(参见上述). automatic update==自动更新 updates are made within fixed cycles:==每隔一定时间自动检查更新: Time between lookup==检查周期 hours==小时 Release blacklist==版本黑名单 regex on release number strings==版本号正则表达式 Release type==版本类型 only main releases==仅主版本号 any release including developer releases==任何版本, 包括测试版 Signed autoupdate:==签名升级: only accept signed files==仅接受签名文件 "Submit"=="提交" Accepted Changes.==已接受改变. System Update Statistics==系统升级状况 Last System Lookup==上一次查找更新 never==从未 Last Release Download==最近一次下载更新 Last Deploy==最近一次应用更新 #----------------------------- #File: Connections_p.html #--------------------------- Server Connection Tracking==服务器连接跟踪 Up-Bytes==上传字节 Showing #[numActiveRunning]# active connections from a max. of #[numMax]# allowed incoming connections==正在显示 #[numActiveRunning]# 活动连接,最大允许传入连接 #[numMax]# Connection Tracking==连接跟踪 Incoming Connections==进入连接 Showing #[numActiveRunning]# active, #[numActivePending]# pending connections from a max.
of #[numMax]# allowed incoming connections.==显示 #[numActiveRunning]# 活动, #[numActivePending]# 挂起连接, 最大允许 #[numMax]# 个进入连接. Protocol==协议 Duration==持续时间 Source IP[:Port]==来源IP[:端口] Dest. IP[:Port]==目标IP[:端口] Command==命令 Used==使用的 Close==关闭 Waiting for new request nr.==等待新请求数. Outgoing Connections==外出连接 Showing #[clientActive]# pooled outgoing connections used as:==显示 #[clientActive]# 个外出链接, 用作: Duration==持续时间 #ID==ID #----------------------------- #File: ContentAnalysis_p.html #--------------------------- Content Analysis==内容分析 These are document analysis attributes==这些是文档分析属性 Double Content Detection==双重内容检测 Double-Content detection is done using a ranking on a 'unique'-Field, named 'fuzzy_signature_unique_b'.==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。 This is the minimum length of a word which shall be considered as element of the signature. Should be either 2 or 3.==这是一个应被视为签名的元素单词的最小长度。 应该是2或3。 The quantRate is a measurement for the number of words that take part in a signature computation. The higher the number, the less==quantRate是参与签名计算的单词数量的度量。 数字越高,越少 words are used for the signature==单词用于签名 For minTokenLen = 2 the quantRate value should not be below 0.24; for minTokenLen = 3 the quantRate value must be not below 0.5.==对于minTokenLen = 2,quantRate值不应低于0.24; 对于minTokenLen = 3,quantRate值必须不低于0.5。 "Re-Set to default"=="重置为默认" "Set"=="设置" Double-Content detection is done using a ranking on a 'unique'-Field==双内容检测是使用名为'fuzzy_signature_unique_b'的'unique'字段上的排名完成的。 The quantRate is a measurement for the number of words that take part in a signature computation. 
The higher the number==quantRate是参与签名计算的单词数量的度量。 数字越高,越少 #----------------------------- #File: ContentControl_p.html #--------------------------- Content Control<==内容控制< Peer Content Control URL Filter==节点内容控制地址过滤器 With this settings you can activate or deactivate content control on this peer==使用此设置,您可以激活或取消激活此YaCy节点上的内容控制 Use content control filtering:==使用内容控制过滤: >Enabled<==>已启用< Enables or disables content control==启用或禁用内容控制 Use this table to create filter:==使用此表创建过滤器: Define a table. Default:==定义一个表格. 默认: Content Control SMW Import Settings==内容控制SMW导入设置 With this settings you can define the content control import settings. You can define a==使用此设置,您可以定义内容控制导入设置. 你可以定义一个 Semantic Media Wiki with the appropriate extensions==语义媒体百科与适当的扩展 SMW import to content control list:==SMW导入到内容控制列表: Enable or disable constant background synchronization of content control list from SMW (Semantic Mediawiki). Requires restart!==启用或禁用来自SMW(Semantic Mediawiki)的内容控制列表的持续后台同步。 需要重启! SMW import base URL:==SMW导入基URL: Define base URL for SMW special page "Ask". Example: ==为SMW特殊页面“Ask”定义基础地址.例: SMW import target table:==SMW导入目标表: Define import target table. Default: contentcontrol==定义导入目标表. 默认值:contentcontrol Purge content control list on initial sync:==在初始同步时清除内容控制列表: Purge content control list on initial synchronisation after startup.==启动后, 在初始同步时清除内容控制列表. "Submit"=="提交" Define base URL for SMW special page "Ask". Example:==为SMW特殊页面“Ask”定义基础地址.例: #----------------------------- #File: ContentIntegrationPHPBB3_p.html #--------------------------- Content Integration: Retrieval from phpBB3 Databases==内容集成: 从phpBB3数据库中导入 It is possible to extract texts directly from mySQL and postgreSQL databases.==能直接从mySQL或者postgreSQL数据库中提取文本. Each extraction is specific to the data that is hosted in the database.==每次提取都针对数据库中存放的数据. This interface gives you access to the phpBB3 forums software content.==通过此接口能访问phpBB3论坛软件内容.
If you read from an imported database, here are some hints to get around problems when importing dumps in phpMyAdmin:==如果从使用phpMyAdmin读取数据库内容, 您可能会用到以下建议: before importing large database dumps, set==在导入尺寸较大的数据库时, in phpmyadmin/config.inc.php and place your dump file in /tmp (Otherwise it is not possible to upload files larger than 2MB)==设置phpmyadmin/config.inc.php的内容, 并将您的数据库文件放到 /tmp 目录下(否则不能上传大于2MB的文件) deselect the partial import flag==取消部分导入 When an export is started, surrogate files are generated into DATA/SURROGATE/in which are automatically fetched by an indexer thread.==导出过程开始时, 在 DATA/SURROGATE/in 目录下自动生成备份文件, 并且会被索引器自动爬取. All indexed surrogate files are then moved to DATA/SURROGATE/out and can be re-cycled when an index is deleted.==所有被索引的备份文件都在 DATA/SURROGATE/out 目录下, 并被索引器循环利用. The URL stub==URL根域名 like http://forum.yacy-websuche.de==比如链接 http://forum.yacy-websuche.de this must be the path right in front of '/viewtopic.php?'==必须在'/viewtopic.php?'前面 Type==数据库 > of database<==> 类型< use either 'mysql' or 'pgsql'==使用'mysql'或者'pgsql' Host==数据库 > of the database<==> 主机名< of database service==数据库服务 usually 3306 for mySQL==MySQL中通常是3306 Name of the database==主机 on the host==数据库 Table prefix string==table for table names==前缀 User==数据库 that can access the database==用户名 Password==给定用户名的 for the account of that user given above==访问密码 Posts per file==导出备份中 in exported surrogates==每个文件拥有的最多帖子数 Check database connection==检查数据库连接 Export Content to Surrogates==导出到备份 Import a database dump==导入数据库 Import Dump==导入 Posts in database==数据库中帖子 first entry==第一个 last entry==最后一个 Info failed:==错误信息: Export successful! Wrote #[files]# files in DATA/SURROGATES/in==导出成功! #[files]# 已写入到 DATA/SURROGATES/in 目录 Export failed:==导出失败: Import successful!==导入成功! 
Import failed:==导入失败: #----------------------------- #File: CookieMonitorIncoming_p.html #--------------------------- Incoming Cookies Monitor==进入Cookies监视器 Cookie Monitor: Incoming Cookies==Cookies监视器: 进入Cookies This is a list of Cookies that a web server has sent to clients of the YaCy Proxy:==Web服务器已向YaCy代理客户端发送的Cookie: Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 条Cookies. Sending Host==发送中的主机 Date==日期 Receiving Client==接收中的客户端 >Cookie<==>Cookie< "Enable Cookie Monitoring"=="开启Cookie监视" "Disable Cookie Monitoring"=="关闭Cookie监视" #----------------------------- #File: CookieMonitorOutgoing_p.html #--------------------------- Outgoing Cookies Monitor==外出Cookie监视器 Cookie Monitor: Outgoing Cookies==Cookie监视器: 外出Cookie This is a list of cookies that browsers using the YaCy proxy sent to webservers:==使用YaCy代理的浏览器向Web服务器发送的Cookie: Showing #[num]# entries from a total of #[total]# Cookies.==显示 #[num]# 个条目, 总共 #[total]# 条Cookie. Receiving Host==接收中的主机 Date==日期 Sending Client==发送中的客户端 >Cookie<==>Cookie< "Enable Cookie Monitoring"=="开启Cookie监视" "Disable Cookie Monitoring"=="关闭Cookie监视" #----------------------------- #File: CookieTest_p.html #--------------------------- Cookie - Test Page==Cookie - 测试页 Here is a cookie test page.==这是一个Cookie测试页. Just clean it==清除它 Name:==名称: Value:==值: Dear server, set this cookie for me!==亲爱的服务器, 请为我设置此Cookie! Cookies at this browser:==此浏览器中的Cookie: Cookies coming to server:==发送到服务器的Cookie: Cookies server sent:==服务器发送的Cookie: YaCy is a GPL'ed project==YaCy是一个GPL项目 with the target of implementing a P2P-based global search engine.==目标是实现一个基于P2P的全球搜索引擎.
Architecture (C) by==架构 (C) 作者 #----------------------------- #File: CrawlCheck_p.html #--------------------------- Crawl Check==爬取检查 This pages gives you an analysis about the possible success for a web crawl on given addresses.==通过本页面,您可以分析在特定地址上进行网络爬取的可能性。 List of possible crawl start URLs==可行的爬取起始网址列表 "Check given urls"=="检查给定的网址" >Analysis<==>分析< >Access<==>访问< >Robots<==>机器人< >Crawl-Delay<==>爬取延时< >Sitemap<==>站点地图< #----------------------------- #File: CrawlProfileEditor_p.html #--------------------------- Crawl Profile Editor==爬取配置文件编辑器 >Crawl Profile Editor<==>爬取文件编辑< >Crawler Steering<==>爬虫向导< >Crawl Scheduler<==>爬取调度器< >Scheduled Crawls can be modified in this table<==>请在下表中修改已安排的爬取< Crawl profiles hold information about a crawl process that is currently ongoing.==爬取文件里保存有正在运行的爬取进程信息. #Crawl profiles hold information about a specific URL which is internally used to perform the crawl it belongs to.==Crawl Profile enthalten Informationen über eine spezifische URL, welche intern genutzt wird, um nachzuvollziehen, wozu der Crawl gehört. #The profiles for remote crawls, indexing via proxy and snippet fetches==Die Profile für Remote Crawl, Indexierung per Proxy und Snippet Abrufe #cannot be altered here as they are hard-coded.==können nicht verändert werden, weil sie "hard-coded" sind.
#Crawl Profile List Crawl Profile List==爬取文件列表 Crawl Thread<==爬取线程< >Collections<==>集合< >Status<==>状态< >Depth<==>深度< Must Match<==必须匹配< >Must Not Match<==>必须不匹配< >Recrawl if older than<==>早于此时间则重新爬取< >Domain Counter Content<==>域计数器内容< >Max Page Per Domain<==>每个域中拥有最大页面< >Accept==>接受 URLs<==地址< >Fill Proxy Cache<==>填充代理缓存< >Local Text Indexing<==>本地文本索引< >Local Media Indexing<==>本地媒体索引< >Remote Indexing<==>远端索引< MaxAge<==最长寿命< no::yes==否::是 Running==运行中 "Terminate"=="终结" Finished==已完成 "Delete"=="删除" "Delete finished crawls"=="删除已完成的爬取进程" Select the profile to edit==选择要修改的文件 "Edit profile"=="修改文件" An error occurred during editing the crawl profile:==修改爬取文件时发生错误: Edit Profile==修改文件 "Submit changes"=="提交改变" #----------------------------- #File: CrawlResults.html #--------------------------- Crawl Results<==爬取结果< >Crawl Results Overview<==>爬取结果概述< These are monitoring pages for the different indexing queues.==这是索引创建队列的监视页面. YaCy knows 5 different ways to acquire web indexes. The details of these processes (1-5) are described within the submenu's listed==YaCy使用5种不同的方式来获取网络索引. 详细描述显示在子菜单的进程(1-5)中, above which also will show you a table with indexing results so far. The information in these tables is considered as private,==以上列表也会显示目前的索引结果. 表中的信息是私有的, so you need to log-in with your administration password.==所以您需要以管理员账户来查看. Case (6) is a monitor of the local receipt-generator, the opposed case of (1). It contains also an indexing result monitor but is not considered private==事件(6)是本地回执生成器的监视器, (1)的相反事件. 它也包含一个索引结果监视器, 但不是私有的. since it shows crawl requests from other peers.==因为它显示了来自其他节点的爬取请求. Case (7) occurs if surrogate files are imported==事件(7)发生在导入备份文件时 The image above illustrates the data flow initiated by web index acquisition.==上图解释了由网页索引获取发起的数据流. Some processes occur double to document the complex index migration structure.==某些进程发生了两次以记录复杂的索引迁移结构.
(1) Results of Remote Crawl Receipts==(1) 远端爬取回执的结果 This is the list of web pages that this peer initiated to crawl,==这是此节点发起爬取的网页列表, but had been crawled by other peers.==但它们早已被 其他 节点爬取了. This is the 'mirror'-case of process (6).==这是进程(6)的'镜像'事件. Use Case: You get entries here, if you start a local crawl on the 'Advanced Crawler' page and check the==用法: 你可在此获得条目, 当你在 '高级爬虫页面 上启动本地爬取并勾选 'Do Remote Indexing'-flag, and if you checked the 'Accept Remote Crawl Requests'-flag on the 'Remote Crawling' page.=='执行远端索引'-标志时, 这需要你确保在 '远端爬取' 页面中勾选了'接受远端爬取请求'-标志. Every page that a remote peer indexes upon this peer's request is reported back and can be monitored here.==远端节点根据此节点的请求编制索引的每个页面都会被报告回来,并且可以在此处进行监视. (2) Results for Result of Search Queries==(2) 搜索查询结果报告页 This index transfer was initiated by your peer by doing a search query.==通过搜索, 此索引转移能被发起. The index was crawled and contributed by other peers.==这个索引是被其他节点贡献与爬取的. Use Case: This list fills up if you do a search query on the 'Search Page'==用法: 如果您在'搜索页面'上执行搜索查询,此列表将填满 (3) Results for Index Transfer==(3) 索引转移结果 The url fetch was initiated and executed by other peers.==这些取回本地的地址是被其他节点发起并爬取. These links here have been transmitted to you because your peer is the most appropriate for storage according to==程序已将这些地址传递给你, 因为根据全球分布哈希表的逻辑, the logic of the Global Distributed Hash Table.==您的节点是最适合存储它们的. Use Case: This list may fill if you check the 'Index Receive'-flag on the 'Index Control' page==用法: 如果您在'索引控制'页面上选中'索引接收'-标志, 则此列表会填写 (4) Results for Proxy Indexing==(4) 代理索引结果 These web pages had been indexed as result of your proxy usage.==以下是由于使用代理而索引的网页. No personal or protected page is indexed==不包括私有或受保护网页 such pages are detected by Cookie-Use or POST-Parameters (either in URL or as HTTP protocol)==通过检测cookie用途和提交参数(链接或者HTTP协议)能够识别出此类网页, and automatically excluded from indexing.==并在索引时自动排除. Use Case: You must use YaCy as proxy to fill up this table.==用法: 必须把YaCy用作代理才能填充此表格. 
Set the proxy settings of your browser to the same port as given==将浏览器代理端口设置为 on the 'Settings'-page in the 'Proxy and Administration Port' field.=='设置'页面'代理和管理端口'选项中的端口. (5) Results for Local Crawling==(5)本地爬取结果 These web pages had been crawled by your own crawl task.==这些网页按照您的爬虫任务已被爬取. Use Case: start a crawl by setting a crawl start point on the 'Index Create' page.==用法: 在'索引创建'页面设置爬取起始点以开始爬取. (6) Results for Global Crawling==(6)全球爬取结果 These pages had been indexed by your peer, but the crawl was initiated by a remote peer.==这些网页已被您的节点创建了索引, 但它们是被远端节点爬取的. This is the 'mirror'-case of process (1).==这是进程(1)的'镜像'事件. Use Case: This list may fill if you check the 'Accept Remote Crawl Requests'-flag on the 'Remote Crawling' page==用法: 如果你在 '远端爬取' 页面勾选'接受远端爬取请求'-标记,此列表会填写 The stack is empty.==此栈为空. Statistics about #[domains]# domains in this stack:==此栈显示有关 #[domains]# 域的数据: (7) Results from surrogates import==(7) 备份导入结果 These records had been imported from surrogate files in DATA/SURROGATES/in==这些记录从 DATA/SURROGATES/in 中的备份文件中导入 Use Case: place files with dublin core metadata content into DATA/SURROGATES/in or use an index import method==将包含Dublin核心元数据的文件放在 DATA/SURROGATES/in 中, 或者使用索引导入方式 (i.e. MediaWiki import, OAI-PMH retrieval)==(例如 MediaWiki 导入, OAI-PMH 导入) >Domain==>域名 "delete all"=="全部删除" Showing all #[all]# entries in this stack.==显示栈中所有 #[all]# 条目. 
Showing latest #[count]# lines from a stack of #[all]# entries.==从共 #[all]# 个条目的栈中显示最近 #[count]# 行. "clear list"=="清除列表" >Executor==>执行者 >Modified==>已修改 >Words==>单词 >Title==>标题 "delete"=="删除" >Collection==>集合 Blacklist to use==使用的黑名单 "del & blacklist"=="删除并拉黑" on the 'Settings'-page in the 'Proxy and Administration Port' field.==在'设置'页面的'代理和管理端口'字段中。 #----------------------------- #File: CrawlStartExpert.html #--------------------------- == Expert Crawl Start==高级爬取设置 Start Crawling Job:==开始爬取任务: You can define URLs as start points for Web page crawling and start crawling here==您可以将指定地址作为爬取网页的起始点 "Crawling" means that YaCy will download the given website, extract all links in it and then download the content behind these links== "爬取中"意即YaCy会下载指定的网站, 并解析出网站中链接的所有内容 This is repeated as long as specified under "Crawling Depth"==它将一直重复直到满足指定的"爬取深度" A crawl can also be started using wget and the==爬取也可以将wget和 for this web page==用于此网页 #Crawl Job >Crawl Job<==>爬取工作< A Crawl Job consist of one or more start point, crawl limitations and document freshness rules==爬取作业由一个或多个起始点、爬取限制和文档新鲜度规则组成 #Start Point >Start Point==>起始点 Define the start-url(s) here.==在这儿确定起始地址. You can submit more than one URL, each line one URL please.==你可以提交多个地址,请一行一个地址. Each of these URLs are the root for a crawl start, existing start URLs are always re-loaded.==每个地址都是爬取开始的根,已有的起始地址会被重新加载. Other already visited URLs are sorted out as "double", if they are not allowed using the re-crawl option.==对已经访问过的地址,如果它们不允许被重新爬取,则被标记为'重复'. One Start URL or a list of URLs:==一个起始地址或地址列表: (must start with==(头部必须有 >From Link-List of URL<==>来自地址的链接列表< From Sitemap==来自站点地图 From File (enter a path==来自文件(输入 within your local file system)<==你本地文件系统的地址)< #Crawler Filter >Crawler Filter==>爬虫过滤器 These are limitations on the crawl stacker. The filters will be applied before a web page is loaded==这些是爬取堆栈器的限制.将在加载网页之前应用过滤器 This defines how often the Crawler will follow links (of links..) embedded in websites.==此选项为爬虫跟踪网站嵌入链接的深度.
0 means that only the page you enter under "Starting Point" will be added==设置为0代表仅将"起始点" to the index. 2-4 is good for normal indexing. Values over 8 are not useful, since a depth-8 crawl will==添加到索引.建议设置为2-4.由于设置为8会索引将近256亿个页面,所以不建议设置大于8的值, index approximately 25.600.000.000 pages, maybe this is the whole WWW.==这可能是整个互联网的内容. >Crawling Depth<==>爬取深度< also all linked non-parsable documents==还包括所有链接的不可解析文档 >Unlimited crawl depth for URLs matching with<==>不限爬取深度,对这些匹配的网址< >Maximum Pages per Domain<==>每个域名最大页面数< Use:==使用: Page-Count==页面数 You can limit the maximum number of pages that are fetched and indexed from a single domain with this option.==使用此选项,您可以限制将从单个域名中爬取和索引的页面数. You can combine this limitation with the 'Auto-Dom-Filter', so that the limit is applied to all the domains within==您可以将此设置与'Auto-Dom-Filter'结合起来, 以限制给定深度中所有域名. the given depth. Domains outside the given depth are then sorted-out anyway.==超出深度范围的域名会被自动忽略. >misc. Constraints<==>其余约束< A questionmark is usually a hint for a dynamic page.==动态页面常用问号标记. URLs pointing to dynamic content should usually not be crawled.==通常不会爬取指向动态页面的地址. However, there are sometimes web pages with static content that==然而,也有些含有静态内容的页面用问号标记. is accessed with URLs containing question marks. If you are unsure, do not check this to avoid crawl loops.==如果您不确定,不要选中此项以防爬取时陷入死循环. Accept URLs with query-part ('?')==接受具有查询格式('?')的地址 Obey html-robots-noindex:==遵守html-robots-noindex: Obey html-robots-nofollow:==遵守html-robots-nofollow: Media Type detection==媒体类型探测 Do not load URLs with an unsupported file extension==不加载具有不支持文件拓展名的地址 Always cross check file extension against Content-Type header==始终针对Content-Type标头交叉检查文件扩展名 >Load Filter on URLs<==>对地址加载过滤器< > must-match<==>必须匹配< The filter is a <==这个过滤器是一个< >regular expression<==>正则表达式< Example: to allow only urls that contain the word 'science', set the must-match filter to '.*science.*'.==列如:只允许包含'science'的地址,就在'必须匹配过滤器'中输入'.*science.*'. 
You can also use an automatic domain-restriction to fully crawl a single domain.==您也可以使用自动域名限制来完全爬取单个域名. Attention: you can test the functionality of your regular expressions using the==注意:要测试正则表达式的功能,可以使用 >Regular Expression Tester<==>正则表达式测试器< within YaCy.==(YaCy内置). Restrict to start domain==限制起始域 Restrict to sub-path==限制子路径 Use filter==使用过滤器 (must not be empty)==(不能为空) > must-not-match<==>必须排除< >Load Filter on IPs<==>对IP加载过滤器< >Must-Match List for Country Codes<==>国家代码必须匹配列表< Crawls can be restricted to specific countries.==可以限制只在某个具体国家爬取. This uses the country code that can be computed from==这会使用国家代码, 它来自 the IP of the server that hosts the page.==该页面所在主机的IP. The filter is not a regular expressions but a list of country codes,==这个过滤器不是正则表达式,而是 separated by comma.==由逗号隔开的国家代码列表. >no country code restriction<==>没有国家代码限制< #Document Filter >Document Filter==>文档过滤器 These are limitations on index feeder.==这些是索引进料器的限制. The filters will be applied after a web page was loaded.==加载网页后将应用过滤器. that must not match with the URLs to allow that the content of the url is indexed.==必须与地址不匹配, 地址的内容才会被索引. >Filter on URLs<==>地址过滤器< >Filter on Content of Document<==>文档内容过滤器< >(all visible text, including camel-case-tokenized url and title)<==>(所有可见文本,包括camel-case-tokenized的网址和标题)< >Filter on Document Media Type (aka MIME type)<==>文档媒体类型过滤器(又称MIME类型)< >Solr query filter on any active <==>Solr查询过滤器对任何有效的< >indexed<==>索引的< > field(s)<==>域< #Content Filter >Content Filter==>内容过滤器 These are limitations on parts of a document.==这些是文档部分的限制. The filter will be applied after a web page was loaded.==加载网页后将应用过滤器. >Filter div or nav class names<==>div或nav类名过滤器< >set of CSS class names<==>CSS类名集合< #comma-separated list of
or

==建议 Surftips are switched off==建议已关闭 title="bookmark"==title="书签" alt="Add to bookmarks"==alt="添加到书签" title="positive vote"==title="好评" alt="Give positive vote"==alt="给予好评" title="negative vote"==title="差评" alt="Give negative vote"==alt="给予差评" YaCy Supporters<==YaCy参与者< >a list of home pages of yacy users<==>显示YaCy用户< provided by YaCy peers using public bookmarks, link votes and crawl start points==由使用公共书签, 网址评价和爬取起始点的节点提供 "Please enter a comment to your link recommendation. (Your Vote is also considered without a comment.)"=="输入推荐链接备注. (可留空.)" "authentication required"=="需要认证" Hide surftips for users without autorization==隐藏非认证用户的建议功能 Show surftips to everyone==所有人均可使用建议 #----------------------------- #File: Table_API_p.html #--------------------------- : Peer Steering==: 节点向导 The information that is presented on this page can also be retrieved as XML.==此页信息也可表示为XML. Click the API icon to see the XML.==点击API图标查看XML. To see a list of all APIs, please visit the ==要查看所有API的列表,请访问 API wiki page==API百科页面 >Process Scheduler<==>进程调度器< This table shows actions that had been issued on the YaCy interface==此表显示在YaCy界面上发出的动作, to change the configuration or to request crawl actions.==它们用于更改配置或请求爬取操作. These recorded actions can be used to repeat specific actions and to send them==这些已记录的动作可用于重复执行特定动作, to a scheduler for a periodic execution.==或将其发送到调度器以周期性执行.
>Recorded Actions<==>已记录的动作< "next page"=="下一页" "previous page"=="上一页" of #[of]#== 共 #[of]# >Type==>类型 >Comment==>注释 Call Count<==调用次数< Recording Date==记录的日期 Last Exec Date==上次执行日期 Next Exec Date==下次执行日期 >Event Trigger<==>事件触发器< "clone"=="克隆" >Scheduler<==>定时器< >no event<==>无事件< >activate event<==>激活事件< >no repetition<==>不重复< >activate scheduler<==>激活定时器< >off<==>关闭< >run once<==>执行一次< >run regular<==>定期执行< >after start-up<==>在启动后< "Execute Selected Actions"=="执行选中的行为" "Delete Selected Actions"=="删除选中的行为" "Delete all Actions which had been created before "=="删除在此日期之前创建的所有行为" day<==天< days<==天< week<==周< weeks<==周< month<==月< months<==月< year<==年< years<==年< >Result of API execution==>API执行结果 >minutes<==>分钟< >hours<==>小时< Scheduled actions are executed after the next execution date has arrived within a time frame of #[tfminutes]# minutes.==已安排的动作会在下次执行日期到达后的 #[tfminutes]# 分钟时间范围内执行. To see a list of all APIs, please visit the==要查看所有API的列表,请访问 #----------------------------- #File: Table_RobotsTxt_p.html #--------------------------- API wiki page==API百科页面 To see a list of all APIs, please visit the==要查看所有API的列表,请访问 To see a list of all APIs==要查看所有API的列表 Table Viewer==表格查看 The information that is presented on this page can also be retrieved as XML.==此页信息也可表示为XML. Click the API icon to see the XML.==点击API图标查看XML. To see a list of all APIs, please visit the API wiki page.==查看所有API, 请访问API Wiki. >robots.txt table<==>爬虫协议列表< #----------------------------- #File: Table_YMark_p.html #--------------------------- Table Viewer==表格查看 YMark Table Administration==YMark表格管理 Table Editor: showing table==表格编辑器: 显示表格 "Edit Selected Row"=="编辑选中行" "Add a new Row"=="添加新行" "Delete Selected Rows"=="删除选中行" "Delete Table"=="删除表格" "Rebuild Index"=="重建索引" Primary Key==主键 >Row Editor<==>行编辑器< "Commit"=="提交" Table Selection==选择表格 Select Table:==选择表格: show max.
entries==显示最多条目 >all<==>所有< Display columns:==显示列: "load"=="载入" Search/Filter Table==搜索/过滤表格 search rows for==搜索 "Search"=="搜索" #>Tags<==>Tags< >select a tag<==>选择标签< >Folders<==>目录< >select a folder<==>选择目录< >Import Bookmarks<==>导入书签< #Importer:==Importer: #>XBEL Importer<==>XBEL Importer< #>Netscape HTML Importer<==>Netscape HTML Importer< "import"=="导入" #----------------------------- ### This Tables section is removed in current SVN Versions #File: Tables_p.html #--------------------------- Table Viewer==表格查看 entries==条目 Table Administration==表格管理 Table Selection==选择表格 Select Table:==选择表格: #"Show Table"=="Zeige Tabelle" show max.==显示最多. >all<==>全部< entries,==个条目, search rows for==搜索内容 "Search"=="搜索" Table Editor: showing table==表格编辑器: 显示表格 #PK==Primärschlüssel "Edit Selected Row"=="编辑选中行" "Add a new Row"=="添加新行" "Delete Selected Rows"=="删除选中行" "Delete Table"=="删除表格" Row Editor==行编辑器 Primary Key==主键 "Commit"=="提交" #----------------------------- #File: Threaddump_p.html #--------------------------- YaCy Debugging: Thread Dump==YaCy调试: 线程转储 Threaddump<==线程转储< "Single Threaddump"=="单个线程转储" "Multiple Dump Statistic"=="多个转储统计" #"create Threaddump"=="Threaddump erstellen" #----------------------------- #File: TransNews_p.html #--------------------------- Translation News for Language==语言翻译新闻 Translation News==翻译新闻 You can share your local addition to translations and distribute it to other peers.==你可以分享你的本地翻译,并分发给其他节点。 The remote peer can vote on your translation and add it to the own local translation.==远端节点可以对您的翻译进行投票并将其添加到他们的本地翻译中。 entries available==可用的条目 "Publish"=="发布" You can check your outgoing messages==你可以检查你的传出消息 >here<==>这儿< To edit or add local translations you can use==要编辑或添加本地翻译,你可以用 File:==文件: Translation:==翻译: >score==>分数 negative vote==反对票 positive vote==赞成票 Vote on this translation==对这个翻译投票 If you vote positive the translation is added to your local translation list==如果您投赞成票,翻译将被添加到您的本地翻译列表中 >Originator<==>发起人< 
#----------------------------- #File: Translator_p.html #--------------------------- Translation Editor==翻译编辑器 Translate untranslated text of the user interface (current language).==翻译用户界面中未翻译的文本(当前语言)。 UI Translation==界面翻译 Target Language:==目标语言: activate a different language==激活另一种语言 Source File==源文件 view it==查看 filter untranslated==列出未翻译项 Source Text==源文本 Translated Text==译文 Save translation==保存翻译 The modified translation file is stored in DATA/LOCALE directory.==修改的翻译文件储存在 DATA/LOCALE 目录下. #----------------------------- #File: User.html #--------------------------- User Page==用户页面 You are not logged in.
==当前未登录.
Username:==用户名: Password:==密码: >Get URL Viewer<==>获取地址查看器< >URL Metadata<==>地址元数据< URL==地址 #Hash==Hash Word Count==字数 Description==描述 Size==大小 View as==查看形式 #Original==Original Plain Text==文本 Parsed Text==解析文本 Parsed Sentences==解析句子 Parsed Tokens/Words==解析令牌/字 Link List==链接列表 "Show"=="显示" Unable to find URL Entry in DB==无法找到数据库中的链接. Invalid URL==无效链接 Unable to download resource content.==无法下载资源内容. Unable to parse resource content.==无法解析资源内容. Unsupported protocol.==不支持的协议. >Original Content from Web<==>网页原始内容< Parsed Content==解析内容 >Original from Web<==>网页原始内容< >Original from Cache<==>缓存原始内容< >Parsed Tokens<==>解析令牌< #----------------------------- #File: ViewLog_p.html #--------------------------- Server Log==服务器日志 Lines==行 reversed order==倒序排列 "refresh"=="刷新" #----------------------------- #File: ViewProfile.html #--------------------------- Local Peer Profile:==本地节点资料: Remote Peer Profile==远端节点资料 Wrong access of this page==页面权限错误 The requested peer is unknown or a potential peer.==所请求节点未知或者是潜在节点. The profile can't be fetched.==无法获取资料. The peer==节点 is not online.==当前不在线. This is the Profile of==这是如下节点的资料 #Name==Name #Nick Name==Nick Name #Homepage==Homepage #eMail==eMail #ICQ==ICQ #Jabber==Jabber #Yahoo!==Yahoo! #MSN==MSN #Skype==Skype Comment==注释 View this profile as==查看方式 > or==> 或者 #vCard==vCard #----------------------------- #File: Vocabulary_p.html #--------------------------- >Vocabulary Administration<==>词汇管理< Vocabularies can be used to produce a search navigation.==词汇表可用于生成搜索导航. A vocabulary must be created before content is indexed.==必须在索引内容之前创建词汇. The vocabulary is used to annotate the indexed content with a reference to the object that is denoted by the term of the vocabulary.==词汇用于通过引用由词汇的术语表示的对象来注释索引的内容. The object can be denoted by a url stub that, combined with the term, becomes the url for the object.==该对象可以用地址存根表示,该存根与该术语一起成为该对象的地址. 
>Vocabulary Selection<==>词汇选择< >Vocabulary Name<==>词汇名< "View"=="查看" >Vocabulary Production<==>词汇生成< Empty Vocabulary== 空词汇 >Auto-Discover<==>自动发现< > from file name==> 来自文件名 > from page title (splitted)==> 来自页面标题(拆分) > from page title==> 来自页面标题 > from page author==> 来自页面作者 >Objectspace<==>对象空间< It is possible to produce a vocabulary out of the existing search index.==可以从现有搜索索引中生成词汇表. This is done using a given 'objectspace' which you can enter as a URL Stub.==这是使用给定的“对象空间”完成的,您可以将其作为地址存根输入. This stub is used to find all matching URLs.==此存根用于查找所有匹配的地址. If the remaining path from the matching URLs then denotes a single file, the file name is used as vocabulary term.==如果来自匹配地址的剩余路径表示单个文件,则文件名用作词汇表术语. This works best with wikis.==这适用于百科. Try to use a wiki url as objectspace path.==尝试使用百科地址作为对象空间路径 Import from a csv file==从csv文件导入 >File Path or==>文件路径或者 >Start line<==>起始行< >Column for Literals<==>文本列< >Synonyms<==>同义词< >no Synonyms<==>无同义词< >Auto-Enrich with Synonyms from Stemming Library<==>使用词干库中的同义词自动丰富< >Read Column<==>读取列< >Column for Object Link (optional)<==>对象链接列(可选)< >Charset of Import File<==>导入文件字符集< >Column separator<==>列分隔符< "Create"=="创建" #----------------------------- #File: WatchWebStructure_p.html #--------------------------- >Text<==>文本< >Pivot Dot<==>枢轴点< "WebStructurePicture"=="网页结构图" >Other Dot<==>其他点< API wiki page==API 百科页面 To see a list of all APIs, please visit the==要查看所有API的列表, 请访问 >Host List<==>主机列表< To see a list of all APIs==要查看所有API的列表 The data that is visualized here can also be retrieved in a XML file, which lists the reference relation between the domains.==此页面数据显示域之间的关联关系, 能以XML文件形式查看. With a GET-property 'about' you get only reference relations about the host that you give in the argument field for 'about'.==使用GET属性'about'仅能获得带有'about'参数的域关联关系. 
With a GET-property 'latest' you get a list of references that had been computed during the current run-time of YaCy, and with each next call only an update to the next list of references.==使用GET属性'latest'能获得在YaCy当前运行期间计算出的关联关系列表, 此后每次调用只会获得对该列表的更新. Click the API icon to see the XML file.==点击API图标查看XML文件. To see a list of all APIs, please visit the API wiki page.==查看所有API, 请访问API Wiki. Web Structure==网页结构 host<==主机< depth<==深度< nodes<==节点< time<==时间< size<==大小< >Background<==>背景< >Line<==>线< >Dot<==>点< >Dot-end<==>末点< >Color <==>颜色< "change"=="改变" #----------------------------- #File: Wiki.html #--------------------------- YaCyWiki page:==YaCyWiki: last edited by==最后编辑由 change date==改变日期 Edit<==编辑< only granted to admin==只授权给管理员 Grant Write Access to==授予写权限 # !!! Do not translate the input buttons because that breaks the function to switch rights !!! #"all"=="Allen" #"admin"=="Administrator" Start Page==开始页面 Index==索引 Versions==版本 Author:==作者: #Text:==Text: You can use==您可以在这使用 Wiki Code here.==wiki代码. "edit"=="编辑" "Submit"=="提交" "Preview"=="预览" "Discard"=="取消" >Preview==>预览 No changes have been submitted so far!==未提交任何改变! Subject==主题 Change Date==改变日期 Last Author==最后作者 IO Error reading wiki database:==读取wiki数据库时出现IO错误: Select versions of page==选择页面版本 Compare version from==原始版本 "Show"=="显示" with version from==对比版本 "current"=="当前" "Compare"=="对比" Return to==返回 Changes will be published as announcement on YaCyNews==改变会被发布在YaCy新闻中. #----------------------------- #File: WikiHelp.html #--------------------------- to embed this video:==嵌入此视频: Text will be displayed underlined.==文本将以下划线显示. Code==代码 This tag displays a Youtube or Vimeo video with the id specified and fixed width 425 pixels and height 350 pixels.==这个标签以固定的425像素宽, 350像素高显示指定id的Youtube或Vimeo视频. i.e. use==比如用 Wiki Help==Wiki帮助 Wiki-Code==Wiki代码 This table contains a short description of the tags that can be used in the Wiki and several other servlets==此表简述了可在Wiki和其他几个页面中使用的标签, of YaCy. 
For a more detailed description visit the==详情请见 #YaCy Wiki==YaCy Wiki Description==描述 #=headline===headline These tags create headlines. If a page has three or more headlines, a table of content will be created automatically.==此标记标识标题内容. 如果页面有多于三个标题, 则会自动创建目录. Headlines of level 1 will be ignored in the table of content.==目录中会忽略一级标题. #text==Text These tags create stressed texts. The first pair emphasizes the text (most browsers will display it in italics),==这些标记标识文本内容. 第一对中为强调内容(多数浏览器用斜体表示), the second one emphazises it more strongly (i.e. bold) and the last tags create a combination of both.==第二对用粗体表示, 第三对为两者的联合. Text will be displayed stricken through.==文本内容以删除线表示. Lines will be indented. This tag is supposed to mark citations, but may as well be used for styling purposes.==缩进内容, 此标记主要用于引用, 也能用于标识样式. #point==point These tags create a numbered list.==此标记用于有序列表. #something<==something< #another thing==another thing #and yet another==and yet another #something else==something else These tags create an unnumbered list.==用于创建无序列表. #word==word #:definition==:definition These tags create a definition list.==用于创建定义列表. This tag creates a horizontal line.==创建水平线. #pagename==pagename #description]]==description]] This tag creates links to other pages of the wiki.==创建到其他wiki页面的链接. This tag displays an image, it can be aligned left, right or center.==显示图片, 可设置左对齐, 右对齐和居中. These tags create a table, whereas the first marks the beginning of the table, the second starts==用于创建表格, 第一个标记为表格开头, 第二个为换行, a new line, the third and fourth each create a new cell in the line. The last displayed tag==第三个与第四个创建列. closes the table.==最后一个为表格结尾. #The escape tags will cause all tags in the text between the starting and the closing tag to not be treated as wiki-code.==Durch diesen Tag wird der Text, der zwischen den Klammern steht, nicht interpretiert und unformatiert als normaler Text ausgegeben. A text between these tags will keep all the spaces and linebreaks in it. 
Great for ASCII-art and program code.==此标记之间的文本会保留所有空格和换行, 主要用于ASCII艺术图片和编程代码. If a line starts with a space, it will be displayed in a non-proportional font.==如果一行以空格开头, 则会以等宽字体显示. url description==URL描述 This tag creates links to external websites.==此标记创建外部网站链接. alt text==替代文本 #----------------------------- #File: YMarks.html #--------------------------- "Import"=="导入" documents==文件 days==天 hours==小时 minutes==分钟 for new documents automatically==自动地对新文件 run this crawl once==爬取一次 >Query<==>查询< Query Type==查询类型 >Import<==>导入< Tag Manager==标签管理器 Bookmarks (user: #[user]# size: #[size]#)==书签(用户: #[user]# 大小: #[size]#) "Replace"=="替换" #----------------------------- #File: api/citation.html #--------------------------- Document Citations for==文档引用 List of other web pages with citations==其他网页与引文列表 Similar documents from different hosts:==来自不同主机的类似文件: #----------------------------- #File: api/table_p.html #--------------------------- Table Viewer==查看表格 #>PK<==>Primärschlüssel< "Edit Table"=="编辑表格" #----------------------------- #File: api/yacydoc.html #--------------------------- >Title<==>标题< >Author<==>作者< >Description<==>描述< >Subject<==>主题< >Publisher<==>发布者< >Contributor<==>贡献者< >Date<==>日期< >Type<==>类型< >Identifier<==>标识符< >Language<==>语言< >Load Date<==>加载日期< >Referrer Identifier<==>关联标识符< #>Referrer URL<==>Referrer URL< >Document size<==>文件大小< >Number of Words<==>字数< #----------------------------- #File: compare_yacy.html #--------------------------- Websearch Comparison==网页搜索对比 Left Search Engine==左侧引擎 Right Search Engine==右侧引擎 Query==查询 "Compare"=="比较" Search Result==结果 #----------------------------- ### Subdirectory env/templates ### #File: env/templates/header.template #--------------------------- ### FIRST STEPS ### First Steps==第一步 Use Case & Account==用法 & 账号 Load Web Pages, Crawler==加载网页,爬虫 RAM/Disk Usage & Updates==内存/硬盘 使用 &更新 Load Web Pages==加载网页 ### MONITORING ### Target Analysis==目标分析 Re-Start<==重启< Shutdown<==关闭< Download YaCy==下载YaCy Search 
Interface==搜索界面 About This Page==关于此页 "Search..."=="搜索中..." Crawler Monitor==爬虫监视 System Status==系统状态 Peer-to-Peer Network==P2P网络 Index Browser==索引浏览器 You did not yet start a web crawl!==您还没开启网络爬虫! Advanced Crawler==高级爬虫 Index Export/Import==索引导出/导入 RAM/Disk Usage ==内存/硬盘使用  Administration== 管理 Toggle navigation==切换导航 Community (Web Forums)==社区(网络论坛) Project Wiki==项目百科 Portal Configuration==门户配置 Portal Design==门户设计 Ranking and Heuristics==排名和启发式 Content Semantic==内容语义 Process Scheduler==进程调度器 Network Access==网络访问 Confirm Re-Start==确认重启 Project Wiki<==项目百科< Git Repository==Git存储库 Bugtracker==错误追踪器 "You just started a YaCy peer!"==“您刚开始一个YaCy节点!” "As a first-time-user you see only basic functions. Set a use case or name your peer to see more options. Start a first web crawl to see all monitoring options."=="作为初次使用者,您只能看到基本的功能. 请命名您的Yacy节点来看更多的选项. 开始第一个网页爬取, 查看所有监视选项." "You did not yet start a web crawl!"=="您还未启动一个网络爬虫!" "You do not see all monitoring options here, because some belong to crawl result monitoring. Start a web crawl to see that!"=="您不会在这里看到所有的监控选项,因为有些属于爬取结果监控. 开始网络爬取看看!" System Administration==系统管理 Configuration==配置 Production==生产 >Administration<==>管理< Search Portal Integration==搜索门户集成 You just started a YaCy peer!==你刚开启了YaCy节点! As a first-time-user you see only basic functions. Set a use case or name your peer to see more options. Start a first web crawl to see all monitoring options.==作为初次使用者, 您只能看到基本的功能. 请命名您的Yacy节点来看更多的选项. 开始第一个网页爬取, 查看所有监视选项. You do not see all monitoring options here, because some belong to crawl result monitoring. Start a web crawl to see that!==您不会在这里看到所有的监控选项,因为有些属于爬取结果监控. 开始网络爬取看看! Use Case ==用法 "You do not see all monitoring options here=="您不会在这里看到所有的监控选项, 因为有些属于爬取结果监控. 开始网络爬取看看!" You do not see all monitoring options here==您不会在这里看到所有的监控选项, 因为有些属于爬取结果监控. 开始网络爬取看看! 
RAM/Disk Usage==内存/硬盘 使用 Use Case==用法 YaCy - Distributed Search Engine==YaCy - 分布式搜索引擎 ### SEARCH & BROWSE ### >Search==>搜索 Web Search==搜索网页 File Search==搜索文件 Search & Browse==搜索 & 浏览 Search Page==搜索网页 Rich Client Search==客户端搜索 Interactive local Search==本地交互搜索 Compare Search==对比搜索 Ranking Config==排名设置 >Surftips==>建议 Local Peer Wiki==本地Wiki >Bookmarks==>书签 >Help==>帮助 ### INDEX CONTROL ### Index Production==索引生成 Index Control==索引控制 Index Creation==索引创建 Crawler Monitor==爬虫监视 Crawl Results==爬取结果 Index Administration==索引管理 Filter & Blacklists==过滤 & 黑名单 ### SEARCH INTEGRATION ### Search Integration==搜索集成 Search Portals==搜索主页 Customization==自定义 ### MONITORING ### Monitoring==监视 YaCy Network==YaCy网络 Web Visualization==网页元素外观 Access Tracker==访问跟踪器 #Server Log==Server Log >Messages==>消息 >Terminal==>终端 "New Messages"=="新消息" ### PEER CONTROL Peer Control==节点控制 Admin Console==管理控制台 >API Action Steering<==>API动作向导< Confirm Restart==确认重启 Re-Start==重启 Confirm Shutdown==确认关闭 >Shutdown==>关闭 ### THE PROJECT ### The Project==项目 Project Home==项目主页 #Deutsches Forum==Deutsches Forum English Forum==英文论坛 YaCy Project Wiki==YaCy项目Wiki # Development Change Log==Entwicklung Änderungshistorie amp;language=en==amp;language=cn Development Change Log==变更日志 Peer Statistics::YaCy Statistics==节点统计数据::YaCy数据 #----------------------------- #File: env/templates/metas.template #--------------------------- English, Englisch==English, Englisch #----------------------------- #File: env/templates/simpleheader.template #--------------------------- Project Wiki==项目百科 Search Interface==搜索界面 About This Page==关于此页 Bugtracker==Bug追踪器 Git Repository==Git存储库 Community (Web Forums)==社区(网络论坛) Download YaCy==下载YaCy Google Appliance API==Google设备API >Web Search<==>网页搜索< >File Search<==>文件搜索< >Compare Search<==>比较搜索< >Index Browser<==>索引浏览器< >URL Viewer<==>地址查看器< Example Calls to the Search API:==调用搜索API的示例: Administration »==管理 » Search Interfaces==搜索界面 Toggle navigation==切换导航 Solr Default Core==Solr默认核心 Solr Webgraph 
Core==Solr网页图形核心 Administration ==管理 Administration==管理 #Administration<==Administration< >Search Network<==>搜索网络< #Peer Owner Profile==节点所有者资料 Help / YaCy Wiki==帮助 / YaCy Wiki #----------------------------- #File: env/templates/submenuAccessTracker.template #--------------------------- Access Grid==访问网格 Incoming Requests Overview==传入请求概述 Incoming Requests Details==传入的请求详细信息 All Connections<==所有连接< Local Search<==本地搜索< Remote Search<==远端搜索< Cookie Menu==Cookie菜单 Incoming Cookies==传入 Cookies Outgoing Cookies==传出 Cookies Incoming==传入 Outgoing==传出 Access Tracker==访问跟踪器 Server Access==服务器访问 Overview==概述 #Details==Details Connections==连接 Local Search==本地搜索 Log==日志 Host Tracker==主机跟踪器 Remote Search==远端搜索 #----------------------------- #File: env/templates/submenuBlacklist.template #--------------------------- Content Control==内容控制 Filter & Blacklists==过滤 & 黑名单 Blacklist Administration==黑名单管理 Blacklist Cleaner==黑名单整理 Blacklist Test==黑名单测试 Import/Export==导入/导出 Index Cleaner==索引整理 #----------------------------- #File: env/templates/submenuComputation.template #--------------------------- >Application Status<==>应用程序状态< >Status<==>状态< System==系统 Thread Dump==线程转储 >Processes<==>进程< >Server Log<==>服务器日志< >Concurrent Indexing<==>并发索引< >Memory Usage<==>内存使用< >Search Sequence<==>搜索序列< >Messages<==>消息< >Overview<==>概述< >Incoming News<==>传入的新闻< >Processed News<==>已处理的新闻< >Outgoing News<==>传出的新闻< >Published News<==>发布的新闻< >Community Data<==>社区数据< >Surftips<==>上网技巧< >Local Peer Wiki<==>本地节点百科< UI Translations==用户界面翻译 >Published==>已发布的 >Processed==>已处理的 >Outgoing==>传出的 >Incoming==>传入的 #----------------------------- #File: env/templates/submenuConfig.template #--------------------------- System Administration==系统管理 Viewer and administration for database tables==数据库表的查看与管理 Performance Settings of Busy Queues==繁忙队列的性能设置 #UNUSED HERE #Peer Administration Console==节点控制台 Status==状态 >Accounts==>账户 Network Configuration==网络设置 >Heuristics<==>启发式< Dictionary Loader==词典加载器 System Update==系统升级 
>Performance==>性能 Advanced Settings==高级设置 Parser Configuration==解析配置 Local robots.txt==本地爬虫协议 Advanced Properties==高级设置 #----------------------------- #File: env/templates/submenuCrawlMonitor.template #--------------------------- Overview==概述 Receipts==回执 Queries==查询 DHT Transfer==DHT 传输 Proxy Use==代理使用 Local Crawling==本地爬取 Global Crawling==全球爬取 Surrogate Import==代理导入 Crawl Results==爬取结果 Crawler<==爬虫< Global==全球 robots.txt Monitor==爬虫协议监视器 Remote==远端 No-Load==空载 Processing Monitor==进程监视 Crawler Queues==爬虫队列 Loader<==加载器< Rejected URLs==已拒绝地址 >Queues<==>队列< Local<==本地< Crawler Steering==爬取向导 Scheduler and Profile Editor<==定时器与资料编辑器< #----------------------------- #File: env/templates/submenuCrawler.template #--------------------------- Load Web Pages==加载网页 Site Crawling==网站爬取 Parser Configuration==解析器配置 #----------------------------- #File: env/templates/submenuDesign.template #--------------------------- >Language<==>语言< Search Page Layout==搜索页面布局 Design==设计 >Appearance<==>外观< Customization==自定义 >Appearance==>外观 User Profile==用户资料 >Language==>语言 #----------------------------- #File: env/templates/submenuIndexControl.template #--------------------------- Index Administration==索引管理 URL Database Administration==地址数据库管理 Index Deletion==索引删除 Index Sources & Targets==索引来源&目标 Solr Schema Editor==Solr模式编辑器 Field Re-Indexing==字段重新索引 Reverse Word Index==反向字索引 Content Analysis==内容分析 Reverse Word Index Administration==详细关键字索引管理 URL References Database==地址关联关系数据库 URL Viewer==地址浏览 #----------------------------- #File: env/templates/submenuIndexCreate.template #--------------------------- Crawler/Spider<==爬虫/蜘蛛< Crawl Start (Expert)==爬取开始(专家模式) Network Scanner==网络扫描仪 Crawling of MediaWikis==MediaWikis爬取 Remote Crawling==远端爬取 Scraping Proxy==收割代理 >Autocrawl<==>自动爬取< Advanced Crawler==高级爬虫 >Crawling of phpBB3 Forums<==>phpBB3论坛爬取< Start a Web Crawl==开启网页爬取 Crawler Queues==爬虫队列 Index Creation==索引创建 Full Site Crawl==全站爬取 Sitemap Loader==网站地图加载 Crawl Start
(Expert)==开始爬取
(专家模式) Network
Scanner==网络
扫描仪 Crawling of==正在爬取 >phpBB3 Forums<==>phpBB3论坛< Content Import<==导入内容< Network Harvesting<==网络采集< Remote
Crawling==远端
爬取 Scraping
Proxy==收割
代理 Database Reader<==数据库读取< for phpBB3 Forums==对于phpBB3论坛 Dump Reader for==Dump阅读器为 #----------------------------- #File: env/templates/submenuIndexImport.template #--------------------------- >Content Export / Import<==>内容导出/导入< >Export<==>导出< >Internal Index Export<==>内部索引导出< >Import<==>导入< RSS Feed Importer==RSS订阅导入器 OAI-PMH Importer==OAI-PMH导入器 >Warc Importer<==>Warc导入器< >Database Reader<==>数据库阅读器< Database Reader for phpBB3 Forums==phpBB3论坛的数据库阅读器 Dump Reader for MediaWiki dumps==MediaWiki转储阅读器 #----------------------------- #File: env/templates/submenuMaintenance.template #--------------------------- RAM/Disk Usage & Updates==内存/硬盘 使用 & 更新 Web Cache==网页缓存 Download System Update==下载系统更新 >Performance<==>性能< RAM/Disk Usage==内存/硬盘 使用 #----------------------------- #File: env/templates/submenuPortalConfiguration.template #--------------------------- Generic Search Portal==通用搜索门户 User Profile==用户资料 Local robots.txt==本地爬虫协议 Portal Configuration==门户配置 Search Box Anywhere==随处搜索框 #----------------------------- #File: env/templates/submenuPublication.template #--------------------------- Publication==发布 Wiki==百科 Blog==博客 File Hosting==文件共享 #----------------------------- #File: env/templates/submenuRanking.template #--------------------------- Solr Ranking Config==Solr排名配置 >Heuristics<==>启发式< Ranking and Heuristics==排名与启发式 RWI Ranking Config==RWI排名配置 #----------------------------- #File: env/templates/submenuSemantic.template #--------------------------- Content Semantic==内容语义 >Automated Annotation<==>自动注释< Auto-Annotation Vocabulary Editor==自动注释词汇编辑器 Knowledge Loader==知识加载器 >Augmented Content<==>增强内容< Augmented Browsing==增强浏览 #----------------------------- #File: env/templates/submenuTargetAnalysis.template #--------------------------- Target Analysis==目标分析 Mass Crawl Check==大量爬取检查 Regex Test==正则表达式测试 #----------------------------- #File: env/templates/submenuUseCaseAccount.template #--------------------------- Use Case & Accounts==用法 & 账号 Use Case ==用法 Use Case==用法 
Basic Configuration==基本设置 >Accounts<==>账户< Network Configuration==网络设置 #----------------------------- #File: env/templates/submenuWebStructure.template #--------------------------- Index Browser==索引浏览器 Web Visualization==网页元素外观 Web Structure==网页结构 Image Collage==图像拼贴 #----------------------------- #File: index.html #--------------------------- == YaCy '#[clientname]#': Search Page==YaCy '#[clientname]#': 搜索页面 >Search<==>搜索< Text==文本 Audio==音频 Images==图片 Video==视频 Applications==应用程序 more options...==更多设置... >ranking modifier<==>排名修改器< click on the red icon in the upper right after a search. this works good in combination with the==搜索后点击右上角的红色图标. 这个结合起来很好用 add search results from external opensearch systems==添加外部opensearch系统的搜索结果 only pages with <date> in content==仅内容包含<date>的页面 add search results from ==从中添加搜索结果 this works good in combination with the '/date' ranking modifier.==这与“/ date”排名修饰符结合使用效果很好. click on the red icon in the upper right after a search.==搜索后点击右上角的红色图标. only pages with ==仅内容包含 add search results from==从中添加搜索结果 "Search"=="搜索" advanced parameters==高级参数 Max. 
number of results==搜索结果最多有 Results per page==每个页面显示结果 Resource==资源 global==全球 >local==>本地 Global search is disabled because==全球搜索被禁用, 因为 DHT Distribution is==DHT分发被 Index Receive is==索引接收被 DHT Distribution and Index Receive are==DHT分发和索引接收被 disabled.#(==禁用.#( URL mask==URL过滤 restrict on==限制 show all==显示所有 Prefer mask==首选过滤 Constraints==约束 only index pages==仅索引页面 "authentication required"=="需要认证" Disable search function for users without authorization==禁止未授权用户搜索 Enable web search to everyone==允许所有人搜索 the peer-to-peer network==P2P网络 only the local index==仅本地索引 Query Operators==查询操作 restrictions==限制 only urls with the <phrase> in the url==仅包含<phrase>的URL only urls with extension==仅带扩展名的地址 only urls from host==仅来自主机的地址 only pages with as-author-anotated==仅标注了指定作者的页面 only pages from top-level-domains==仅来自顶级域名的页面 only resources from http or https servers==仅来自http/https服务器的资源 only resources from ftp servers==仅来自ftp服务器的资源 they are rare==很少 crawl them yourself==您需要自己爬取它们 only resources from smb servers==仅来自smb服务器的资源 Intranet Indexing must be selected==局域网索引必须被选中 only files from a local file system==仅来自本机文件系统的文件 ranking modifier==排名修改 sort by date==按日期排序 latest first==最新者居首 multiple words shall appear near==多个词应邻近出现 doublequotes==双引号 prefer given language==首选语言 an ISO 639-1 2-letter code==ISO 639-1 标准的双字母代码 heuristics==启发式 add search results from blekko==添加来自blekko的搜索结果 Search Navigation==搜索导航 keyboard shortcuts==快捷键 Access key modifier + n==访问键 modifier + n next result page==下一页 Access key modifier + p==访问键 modifier + p previous result page==上一页 automatic result retrieval==自动结果检索 browser integration==浏览集成 after searching, click-open on the default search engine in the upper right search field of your browser and select 'Add "YaCy Search.."'==搜索后, 点击浏览器右上方区域中的默认搜索引擎, 并选择'添加"YaCy"' search as rss feed==作为RSS-Feed搜索 click on the red icon in the upper right after a search. this works good in combination with the '/date' ranking modifier. See an==搜索后点击右上方的红色图标. 配合'/date'排名修改, 能取得较好效果. 
>example==>例 json search results==json搜索结果 for ajax developers: get the search rss feed and replace the '.rss' extension in the search result url with '.json'==对AJAX开发者: 获取搜索结果页的RSS-Feed, 并用'.json'替换'.rss'搜索结果链接中的扩展名 #----------------------------- #File: js/Crawler.js #--------------------------- "Continue this queue"=="继续队列" "Pause this queue"=="暂停队列" #----------------------------- #File: js/yacyinteractive.js #--------------------------- >total results==>全部结果  topwords:== 热门词: >Name==>名称 >Size==>大小 >Date==>日期 #----------------------------- #File: proxymsg/authfail.inc #--------------------------- Your Username/Password is wrong.==用户名/密码输入错误. Username==用户名 Password==密码 "login"=="登录" #----------------------------- #File: proxymsg/error.html #--------------------------- YaCy: Error Message==YaCy: 错误消息 request:==请求: unspecified error==未定义错误 not-yet-assigned error==尚未指定的错误 You don't have an active internet connection. Please go online.==无网络链接, 请上线. Could not load resource. The file is not available.==无法加载资源, 文件不可用. Exception occurred==异常发生 Generated #[date]# by==生成日期 #[date]# 由 #----------------------------- #File: proxymsg/proxylimits.inc #--------------------------- Your Account is disabled for surfing.==您的账户没有浏览权限. Your Timelimit (#[timelimit]# Minutes per Day) is reached.==您的账户时限(#[timelimit]# 分钟每天)已到. #----------------------------- #File: proxymsg/unknownHost.inc #--------------------------- The server==服务器 could not be found.==未找到. Did you mean:==是不是: #----------------------------- #File: sharedBlacklist_p.html #--------------------------- Shared Blacklist==共享黑名单 Add Items to Blacklist==添加条目到黑名单 Unable to store the items into the blacklist file:==不能存储条目到黑名单文件: #File Error! Wrong Path?==Datei Fehler! Falscher Pfad? YaCy-Peer "#[name]#" not found.==YaCy节点"#[name]#"未找到. not found or empty list.==未找到或者列表为空. Wrong Invocation! Please invoke with==调用错误! 
请以如下参数调用 Blacklist source:==黑名单源: Blacklist target:==黑名单目的: Blacklist item==黑名单条目 "select all"=="全部选择" "deselect all"=="全部反选" value="add"==value="添加" #----------------------------- #File: terminal_p.html #--------------------------- YaCy Peer Live Monitoring Terminal==YaCy节点实时监控终端 YaCy System Terminal Monitor==YaCy系统终端监视器 #YaCy System Monitor==YaCy System Monitor Search Form==搜索页面 Crawl Start==开始爬取 Status Page==状态页面 Confirm Shutdown==确认关闭 ><Shutdown==><关闭程序 Event Terminal==事件终端 Image Terminal==图形终端 Domain Monitor==域监视器 "Loading Processing software..."=="正在载入Processing软件..." This browser does not have a Java Plug-in.==此浏览器没有安装Java插件. Get the latest Java Plug-in here.==在此获取最新的Java插件. Resource Monitor==资源监视器 Network Monitor==网络监视器 #----------------------------- #File: yacy/ui/index.html #--------------------------- About YaCy-UI==关于YaCy-UI Admin Console==管理控制台 "Bookmarks"=="书签" >Bookmarks==>书签 Server Log==服务器日志 #----------------------------- #File: yacy/ui/js/jquery-flexigrid.js #--------------------------- 'Displaying {from} to {to} of {total} items'=='显示 {from} 到 {to}, 总共 {total} 个条目' 'Processing, please wait ...'=='正在处理, 请稍候...' 
'No items'=='无条目' #----------------------------- #File: yacy/ui/js/jquery-ui-1.7.2.min.js #--------------------------- Loading…==正在加载… #----------------------------- #File: yacy/ui/js/jquery.ui.all.min.js #--------------------------- Loading…==正在加载… #----------------------------- #File: yacy/ui/sidebar/sidebar_1.html #--------------------------- YaCy P2P Websearch==YaCy P2P搜索 "Search"=="搜索" >Text==>文本 >Images==>图片 >Audio==>音频 >Video==>视频 >Applications==>应用 Search term:==搜索条目: # do not translate class="help" which only has technical html semantics alt="help"==alt="帮助" title="help"==title="帮助" Resource/Network:==资源/网络: freeworld==自由世界 local peer==本地节点 >bookmarks==>书签 sciencenet==ScienceNet >Language:==>语言: any language==任意语言 Bookmark Folders==书签目录 #----------------------------- #File: yacy/ui/sidebar/sidebar_2.html #--------------------------- Bookmark Tags<==标签< Search Options==搜索设置 Constraint:==约束: all pages==所有页面 index pages==索引页面 URL mask:==URL过滤: Prefer mask:==首选过滤: Bookmark TagCloud==标签云 Topwords<==顶部< alt="help"==alt="帮助" title="help"==title="帮助" #----------------------------- #File: yacy/ui/yacyui-admin.html #--------------------------- Peer Control==节点控制 "Login"=="登录" Themes==主题 Messages==消息 Re-Start==重启 Shutdown==关闭 Web Indexing==网页索引 Crawl Start==开始爬取 Monitoring==监视 YaCy Network==YaCy网络 >Settings==>设置 "Basic Settings"=="基本设置" Basic== 基本 Accounts==账户 "Network"=="网络" Network== 网络 "Advanced Settings"=="高级设置" Advanced== 高级 "Update Settings"=="升级设置" Update== 升级 >YaCy Project==>YaCy项目 "YaCy Project Home"=="YaCy项目主页" Project== 项目 "YaCy Forum"=="YaCy论坛" "Help"=="帮助" #----------------------------- #File: yacy/ui/yacyui-bookmarks.html #--------------------------- 'Add'=='添加' 'Crawl'=='爬取' 'Edit'=='编辑' 'Delete'=='删除' 'Rename'=='重命名' 'Help'=='帮助' "YaCy Bookmarks"=="YaCy书签" 'Public'=='公有' 'Title'=='题目' 'Tags'=='标签' 'Folders'=='目录' 'Date'=='日期' #----------------------------- #File: yacy/ui/yacyui-welcome.html #--------------------------- >Overview==>概述 YaCy-UI is going 
to be a JavaScript based client for YaCy based on the existing XML and JSON API.==YaCy-UI 是基于JavaScript的YaCy客户端, 它使用当前的XML和JSON API. YaCy-UI is at most alpha status, as there is still problems with retriving the search results.==YaCy-UI 目前最多处于alpha阶段, 检索搜索结果时仍有问题. I am currently changing the backend to a more application friendly format and getting good results with it (I will check that in some time after the stable release 0.7).==目前我正在把后台改为对应用程序更友好的格式, 并已取得不错的效果(我会在稳定版0.7发布后一段时间再检查). For now have a look at the bookmarks, performance has increased significantly, due to the use of JSON and Flexigrid!==就目前来说, 可以先看看书签功能; 由于使用JSON和Flexigrid, 性能已显著提升! #----------------------------- #File: yacyinteractive.html #--------------------------- YaCy Interactive Search==YaCy交互搜索 This search result can also be retrieved as RSS/opensearch output.==此搜索结果能以RSS/opensearch形式表示. The query format is similar to SRU.==请求的格式与SRU相似. Click the API icon to see an example call to the search rss API.==点击API图标查看示例. To see a list of all APIs, please visit the==查看所有API, 请访问 API wiki page==API 百科页面 loading from local index...==从本地索引加载... e="Search"==e="搜索" "Search..."=="搜索中..." #----------------------------- #File: yacysearch.html #--------------------------- # Do not translate id="search" and rel="search" which only have technical html semantics Search Page==搜索页面 This search result can also be retrieved as RSS/opensearch output.==此搜索结果能以RSS/opensearch形式表示. "search again"=="再次搜索" Illegal URL mask:==非法网址掩码: (not a valid regular expression), mask ignored.==(不是一个有效的正则表达式),掩码忽略. Illegal prefer mask:==非法首选掩码: Did you mean:==您想搜: The following words are stop-words and had been excluded from the search:==以下关键词是停用词, 已从搜索中排除: No Results.==未找到. length of search words must be at least 1 character==搜索文本最少一个字符 Searching the web with this peer is disabled for unauthorized users. 
Please==对于未经授权的用户,将禁用使用此节点搜索Web。 请 >log in<==>登录< as administrator to use the search function==作为管理员使用搜索功能 Location -- click on map to enlarge==位置 -- 点击地图放大 Map (c) by <==Map (c) by < and contributors, CC-BY-SA==and contributors, CC-BY-SA >Media<==>媒体< > of==> 共 > local,==> 本地, remote from==远端 来自 YaCy peers).==YaCy 节点). #----------------------------- #File: yacysearchitem.html #--------------------------- "bookmark"=="书签" "recommend"=="推荐" "delete"=="删除" Pictures==图片 #----------------------------- #File: yacysearchtrailer.html #--------------------------- show search results for "#[query]#" on map==在地图上显示 "#[query]#" 的搜索结果 >Provider==>提供者 >Name Space==>命名空间 >Author==>作者 >Filetype==>文件类型 >Language==>语言    Peer-to-Peer    ==   P2P         Stealth Mode   ==    隐身 模式    "Privacy"=="隐私" Context Ranking==按内容排名 Sort by Date==按日期排序 Documents==文件 Images==图片 "Your search is done using peers in the YaCy P2P network."=="您的搜索是靠YaCy P2P网络中的节点完成的。" "You can switch to 'Stealth Mode' which will switch off P2P, giving you full privacy. Expect less results then, because then only your own search index is used."=="您可以切换到'隐形模式',这将关闭P2P,给你完全的隐私。期待较少的结果,因为那时只有您自己的搜索索引被使用。" "Your search is done using only your own peer, locally."=="你的搜索是靠在本地的YaCy节点完成的。" "You can switch to 'Peer-to-Peer Mode' which will cause that your search is done using the other peers in the YaCy network."=="您可以切换到'P2P',这将让您的搜索使用YaCy网络中的YaCy节点。" >Documents==>文件 >Images==>图片 #----------------------------- # EOF