Adam Fedor wrote:
I've implemented support for NSBitmapImageReps
by changing the backend method NSReadPixel: to GSReadRect: (the
NSReadPixel function could be implemented in terms of GSReadRect).
I think replacing NSReadPixel: with GSReadRect: is a bad idea for a
couple of reasons:
* The gsMethodTable struct is essentially public since it's used by
inlined functions in user code. Thus, removing entries (or inserting
entries before the last entry) makes new versions of GNUstep binary
incompatible with old versions.
* Implementing NSReadPixel: in terms of GSReadRect: is possible, but not
trivial. To be correct, you'd need to read a tiny rectangle around the
point and hope that you get just one pixel back. If you don't, you need
to shrink the rectangle and try again.
* By making NSReadPixel a function in -gui, it isn't possible for
backends to implement it more efficiently. If it remains as an entry in
gsMethodTable, we can provide a default implementation in
NSGraphicsContext and backends can optionally override it with a more
efficient implementation.
Thus, I think we should keep NSReadPixel: in gsMethodTable and add
GSReadRect: (at the end of the struct to prevent binary incompatibility).
I implemented it in the xlib backend. Perhaps Alex could give me a
description of how to do it in the art backend (or maybe do a quick hack
himself).
I can implement it, but since you've chosen to make this a new operator
instead of using readimage/sizeimage, I'll need documentation for it.
Read raw pixels from the device and return the information as a bitmap.
Pixels are read from the smallest device-pixel aligned rectangle
containing rect (defined in the current graphics state and clipped to
the current window, but not against the clipping path). If the
device rectangle is degenerate, Size will be (0,0) and Data will be empty,
but the other entries in the dictionary will be filled in.
If the device does not support the operation, returns nil.
The returned dictionary contains at least the following keys:
Data: An NSData instance with the image data.
Size: An NSValue/NSSize with the size in pixels of the returned image data.
BitsPerSample: An NSValue/unsigned int.
SamplesPerPixel: An NSValue/unsigned int.
ColorSpace: An NSString with the name of the color space the data is in.
HasAlpha: An NSValue/unsigned int. 0 if the returned image does not have
an alpha channel, 1 if it does.
Matrix: An NSAffineTransform instance that contains the transform between
current user space and image space for this image.
(Documented semantics are modeled after readimage/sizeimage, with the
acronyms expanded, the redundant 'Image' removed, and the details made
clear.) I'm not sure
whether we should include 'BitsPerPixel' and 'IsPlanar' keys to make
sure we cover all formats. There are cases where the device formats
need these, but the backend has to make a copy of that data anyway, so
it might as well make the data non-planar and remove dead bits while
it's at it.
Is this OK? If so, I can update the existing code and implement it in the
art backend.