libcdio-devel

Re: [Libcdio-devel] Read Sub-channel changes


From: Thomas Schmitt
Subject: Re: [Libcdio-devel] Read Sub-channel changes
Date: Tue, 31 May 2016 23:13:23 +0200

Hi,

I wrote:
> >  - cd-info should report Sense Codes of unexpectedly failed SCSI commands.

Rocky Bernstein wrote:
> Suggest some code to detect errors.

The Linux driver function already detects them:

static driver_return_code_t
run_mmc_cmd_linux(void *p_user_data,
    ...
    /* Record SCSI sense reply for API call mmc_last_cmd_sense().
    */
    if (sense.additional_sense_len) {
        /* SCSI Primary Command standard
           SPC 4.5.3, Table 26: 252 bytes legal, 263 bytes possible */
        int sense_size = sense.additional_sense_len + 8;

        if (sense_size > sizeof(sense))
            sense_size = sizeof(sense);
        memcpy((void *) p_env->gen.scsi_mmc_sense, &sense, sense_size);
        p_env->gen.scsi_mmc_sense_valid = sense_size;
    }

So we have to find an appropriate place to call mmc_last_cmd_sense()
and report the error indicators in human-readable form.
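Such reporting could look like the following minimal sketch. The helper names are mine, not libcdio API; the caller would obtain KEY, ASC, and ASCQ from the sense data returned by mmc_last_cmd_sense(). The sense-key names are from SPC.

```c
#include <stdio.h>

/* Hypothetical helper: map a SPC sense key (0x0 .. 0xF) to its name. */
static const char *sense_key_name(int key)
{
    static const char *names[16] = {
        "NO SENSE", "RECOVERED ERROR", "NOT READY", "MEDIUM ERROR",
        "HARDWARE ERROR", "ILLEGAL REQUEST", "UNIT ATTENTION",
        "DATA PROTECT", "BLANK CHECK", "VENDOR SPECIFIC",
        "COPY ABORTED", "ABORTED COMMAND", "RESERVED",
        "VOLUME OVERFLOW", "MISCOMPARE", "COMPLETED"
    };
    return names[key & 0x0f];
}

/* Hypothetical helper: render KEY/ASC/ASCQ in human-readable form. */
static void report_sense(int key, int asc, int ascq, char *buf, size_t n)
{
    snprintf(buf, n, "SCSI error: %s [ASC %02X ASCQ %02X]",
             sense_key_name(key), asc, ascq);
}
```

Whether the resulting string goes to stderr, a log callback, or a message queue is exactly the open question below.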

In libburn, I put such messages into a FIFO from which the application
can fetch them and decide whether or not to display them, based on a
"severity" value that libburn attaches to each message (e.g. "FATAL",
"WARNING", "DEBUG").

What's libcdio's general strategy for messages from library to application?

-------------------------------------------------------------------------

When interpreting the sense data, I see a model problem in
  ./include/cdio/mmc.h , typedef struct cdio_mmc_request_sense

The Linux kernel injects synthetic sense data in "Descriptor format"
(SPC-3, table 13), where KEY is in bytes[1], ASC in bytes[2], and ASCQ
in bytes[3], rather than the "Fixed format" (SPC-3, table 26) issued by
drives, where KEY is in bytes[2], ASC in bytes[12], and ASCQ in bytes[13].
Further, descriptor format has no FILEMARK, EOM, or ILI bits.

Descriptor format is identified by bytes[0] == 0x72 || bytes[0] == 0x73.
Fixed format by bytes[0] == 0x70 || bytes[0] == 0x71.
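Decoding the two layouts by that identification rule could look like the following minimal sketch; decode_sense is a hypothetical helper name, not part of libcdio:

```c
#include <stdint.h>

/* Hypothetical helper: extract KEY, ASC, ASCQ from a raw sense buffer,
   choosing the layout by the response code in bytes[0] (the top bit is
   the VALID flag in fixed format, so it is masked off).
   Returns 0 on success, -1 if the response code is unrecognized. */
static int
decode_sense(const uint8_t *sense, int *key, int *asc, int *ascq)
{
    switch (sense[0] & 0x7f) {
    case 0x70: case 0x71:   /* Fixed format (SPC-3, table 26) */
        *key  = sense[2] & 0x0f;
        *asc  = sense[12];
        *ascq = sense[13];
        return 0;
    case 0x72: case 0x73:   /* Descriptor format (SPC-3, table 13) */
        *key  = sense[1] & 0x0f;
        *asc  = sense[2];
        *ascq = sense[3];
        return 0;
    default:
        return -1;
    }
}
```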

One would have to define a union, I guess, and interpret .error_code
to decide which alternative layout to use.
But what to do about the missing .filemark, .eom, and .ili bits?

Can we deprecate the cdio_mmc_request_sense_t interface and access the
meaningful parameters by API calls instead?
In that case we could instruct applications to check for
  (cdio_mmc_request_sense_t.error_code & 2)
which will be handled only by the API calls.
If bit 1 is not set, then the cdio_mmc_request_sense_t interface is valid.
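The proposed check is a one-liner, sketched here with a hypothetical helper name:

```c
#include <stdint.h>

/* Sketch of the proposed validity test: bit 1 of the sense response
   code is set for descriptor format (0x72, 0x73) and clear for fixed
   format (0x70, 0x71), where the declared cdio_mmc_request_sense_t
   layout applies as-is. */
static int is_descriptor_format(uint8_t error_code)
{
    return (error_code & 2) != 0;
}
```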


Have a nice day :)

Thomas



