Re: [RFC] Reading images

From: Alexander Malmberg
Subject: Re: [RFC] Reading images
Date: Sun, 21 Sep 2003 13:20:39 +0200

Adam Fedor wrote:
> I've implemented support for NSBitmapImageRep's initWithFocusedViewRect:
> by changing the backend method NSReadPixel: to GSReadRect: (the
> NSReadPixel function could be implemented in terms of GSReadRect
> anyway).

I think replacing NSReadPixel: with GSReadRect: is a bad idea for a
few reasons:

* The gsMethodTable struct is essentially public since it's used by
inlined functions in user code. Thus, removing entries (or inserting
entries before the last entry) makes new versions of GNUstep binary
incompatible with old versions.

* Implementing NSReadPixel: in terms of GSReadRect: is possible, but not
trivial. To be correct, you'd need to read a tiny rectangle around the
point and hope that you get just one pixel back. If you don't, you need
to shrink the rectangle and try again.

* By making NSReadPixel a function in -gui, it isn't possible for
backends to implement it more efficiently. If it remains as an entry in
gsMethodTable, we can provide a default implementation in
NSGraphicsContext and backends can optionally override it with a more
efficient version.

Thus, I think we should keep NSReadPixel: in gsMethodTable and add
GSReadRect: (at the end of the struct to prevent binary
incompatibility).
> I implemented it in the xlib backend.  Perhaps Alex could give me a hint
> of how to do it in the art backend (or maybe do a quick hack himself :-)

I can implement it, but since you've chosen to make this a new operator
instead of using readimage/sizeimage, I'll need documentation for it. :)

Something like:

<cut here>
Read raw pixels from the device and return the information as a bitmap.
Pixels are read from the smallest device-pixel aligned rectangle
containing rect (defined in the current graphics state and clipped to
the current window, but not against the clipping path). If the resulting
device rectangle is degenerate, Size will be (0,0) and Data will be nil,
but the other entries in the dictionary will be filled in.

If the device does not support the operation, returns nil.

The returned dictionary contains at least the following keys:

Data: An NSData-instance with the image data.

Size: An NSValue/NSSize with the size in pixels of the returned image.

BitsPerSample: An NSValue/unsigned int.

SamplesPerPixel: An NSValue/unsigned int.

ColorSpace: An NSString with the name of the color space the data is in.

HasAlpha: An NSValue/unsigned int. 0 if the returned image does not have
an alpha channel, 1 if it does.

Matrix: An NSAffineTransform-instance that contains the transform
between current user space and image space for this image.
</cut here>

(Document semantics modeled after readimage/sizeimage, with acronyms
expanded, the redundant 'Image' removed, and details made clearer.) I'm not sure
whether we should include 'BitsPerPixel' and 'IsPlanar' keys to make
sure we cover all formats. There are cases where the device formats will
need these, but the backend has to make a copy of that data anyway, so
it might as well make the data non-planar and remove dead bits while
it's at it.

Is this OK? If so, I can update the existing code and implement it in
the art backend.

A warning to those who want to use this: the results are fairly
device-dependent. Doing image manipulation this way is not a good
idea. Even just drawing an image to an off-screen window and reading it
back might return different data for a number of reasons, of which the
most common will be:

* The device might not be 72 dpi (in that case, the size in pixels of
the returned image will not be the same as the original image).
* The device might not have as much color resolution as the image (e.g.
if it's 16-bpp, all your color values will be truncated to 5 or 6 bits).
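The second point is easy to see numerically: pushing an 8-bit component through a 5-bit channel and back does not round-trip. A small C illustration:

```c
#include <assert.h>

/* Round-trip an 8-bit color component through a 5-bit channel, as
 * happens for red and blue on a 16-bpp 5-6-5 display.  The low three
 * bits are lost, so reading the image back can give different data. */
static unsigned char through_5bit(unsigned char v)
{
    unsigned char v5 = v >> 3;          /* truncate 8 bits to 5 */
    return (v5 << 3) | (v5 >> 2);       /* expand back to 8 bits */
}
```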

- Alexander Malmberg
