yacy_search_server/htroot/ViewImage.java


// ViewImage.java
// -----------------------
// part of YaCy
// (C) by Michael Peter Christen; mc@yacy.net
// first published on http://www.anomic.de
// Frankfurt, Germany, 2006
// created 03.04.2006
//
// This program is free software; you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation; either version 2 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
import java.awt.Container;
import java.awt.Image;
import java.awt.MediaTracker;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.util.HashMap;
import de.anomic.http.httpHeader;
import de.anomic.http.httpRequestHeader;
import de.anomic.kelondro.util.FileUtils;
import de.anomic.plasma.plasmaSwitchboard;
import de.anomic.search.SnippetCache;
import de.anomic.server.serverObjects;
import de.anomic.server.serverSwitch;
import de.anomic.yacy.yacyURL;
import de.anomic.yacy.logging.Log;
import de.anomic.ymage.ymageImageParser;

public class ViewImage {
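    // cache for already scaled favicon-sized (16x16) images, keyed by the image URL string;
    // filled at the end of respond() so that repeated favicon requests skip loading and scaling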
    private static HashMap<String, Image> iconcache = new HashMap<String, Image>();
    private static String defaulticon = "htroot/env/grafics/dfltfvcn.ico";
    private static byte[] defaulticonb;

    static {
        try {
            defaulticonb = FileUtils.read(new File(defaulticon));
        } catch (final IOException e) {
        }
    }
    public static Image respond(final httpRequestHeader header, final serverObjects post, final serverSwitch env) {
        final plasmaSwitchboard sb = (plasmaSwitchboard) env;

        // the URL of the image can be submitted either in clear text or via a license key;
        // if the URL is given in clear text, the requesting user must be authorized as admin,
        // while a license key can also be used by non-authorized users
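        // illustrative request parameters (names taken from the post.get()/post.getInt() calls below):
        //   ...?url=http://example.com/pic.jpg&maxwidth=96    (clear-text URL, authorized users only)
        //   ...?code=<license-key>&width=16&height=16         (license key, also for anonymous users)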
        String urlString = post.get("url", "");
        final String urlLicense = post.get("code", "");
        final boolean auth = (header.get(httpHeader.CONNECTION_PROP_CLIENTIP, "")).equals("localhost") || sb.verifyAuthentication(header, true); // handle access rights

        yacyURL url = null;
        if ((urlString.length() > 0) && (auth)) try {
            url = new yacyURL(urlString, null);
        } catch (final MalformedURLException e1) {
            url = null;
        }
        if ((url == null) && (urlLicense.length() > 0)) {
            url = sb.licensedURLs.releaseLicense(urlLicense);
            urlString = (url == null) ? null : url.toNormalform(true, true);
        }

        int width = post.getInt("width", 0);
        int height = post.getInt("height", 0);
        int maxwidth = post.getInt("maxwidth", 0);
        int maxheight = post.getInt("maxheight", 0);
        final int timeout = post.getInt("timeout", 5000);

        // getting the image as stream
        Image scaled = iconcache.get(urlString);
        if (scaled == null) {
            Object[] resource = null;
            if (url != null) try {
                resource = SnippetCache.getResource(url, true, timeout, false, true);
            } catch (IOException e) {
                Log.logWarning("ViewImage", "cannot load: " + e.getMessage());
            }
            byte[] imgb = null;
            if (resource == null) {
                if (urlString.endsWith(".ico")) {
                    // load default favicon dfltfvcn.ico
                    if (defaulticonb == null) try {
                        imgb = FileUtils.read(new File(sb.getRootPath(), defaulticon));
                    } catch (final IOException e) {
                        return null;
                    } else {
                        imgb = defaulticonb;
                    }
                } else {
                    return null;
                }
            } else {
                final InputStream imgStream = (InputStream) resource[0];
                if (imgStream == null) return null;

                // read image data
                try {
                    imgb = FileUtils.read(imgStream);
                } catch (final IOException e) {
                    return null;
                } finally {
                    try {
                        imgStream.close();
                    } catch (final Exception e) {/* ignore this */}
                }
            }

            // read image
            final Image image = ymageImageParser.parse(urlString, imgb);
            if (image == null || (auth && (width == 0 || height == 0) && maxwidth == 0 && maxheight == 0)) return image;

            // find original size
            final int h = image.getHeight(null);
            final int w = image.getWidth(null);
            // in case of non-authorized access, shrink the image to prevent copyright problems,
            // so that delivered images are not larger than thumbnails
            if (auth) {
                maxwidth = (maxwidth == 0) ? w : maxwidth;
                maxheight = (maxheight == 0) ? h : maxheight;
            } else if ((w > 16) || (h > 16)) {
                maxwidth = (int) Math.min(64.0, w * 0.6);
                maxheight = (int) Math.min(64.0, h * 0.6);
            } else {
                maxwidth = 16;
                maxheight = 16;
            }

            // calculate width & height from maxwidth & maxheight
            if ((maxwidth < w) || (maxheight < h)) {
                // scale image
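                // use the smaller of the horizontal and vertical shrink ratios so that both
                // dimensions fit within maxwidth x maxheight while the aspect ratio is preserved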
                final double hs = (w <= maxwidth) ? 1.0 : ((double) maxwidth) / ((double) w);
                final double vs = (h <= maxheight) ? 1.0 : ((double) maxheight) / ((double) h);
                double scale = Math.min(hs, vs);
                if (!auth) scale = Math.min(scale, 0.6); // this is for copyright purposes
                if (scale < 1.0) {
                    width = Math.max(1, (int) (w * scale));
                    height = Math.max(1, (int) (h * scale));
                } else {
                    width = Math.max(1, w);
                    height = Math.max(1, h);
                }

                // compute scaled image
                scaled = ((w == width) && (h == height)) ? image : image.getScaledInstance(width, height, Image.SCALE_AREA_AVERAGING);
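                // getScaledInstance() may deliver the scaled pixels asynchronously; the MediaTracker
                // below blocks until scaling has finished so that a fully rendered image is returned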
                final MediaTracker mediaTracker = new MediaTracker(new Container());
                mediaTracker.addImage(scaled, 0);
                try { mediaTracker.waitForID(0); } catch (final InterruptedException e) {}
            } else {
                // do not scale
                width = w;
                height = h;
                scaled = image;
            }

            if ((height == 16) && (width == 16) && (resource != null)) {
                // this might be a favicon; store the image in the cache for faster re-loading later
                iconcache.put(urlString, scaled);
            }
        }
        return scaled;
    }
}