qemu-s390x

Re: [PATCH v5 18/18] s390x: pv: Add dump support


From: Janosch Frank
Subject: Re: [PATCH v5 18/18] s390x: pv: Add dump support
Date: Thu, 11 Aug 2022 15:03:52 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.9.0

On 8/11/22 14:11, Janosch Frank wrote:
Sometimes dumping a guest from the outside is the only way to get the
data that is needed. This can be the case if a dumping mechanism like
KDUMP hasn't been configured, or if data needs to be fetched at a
specific point in time. Dumping a protected guest from the outside
without help from fw/hw doesn't yield sufficient data to be useful,
hence we now introduce PV dump support.

The PV dump support works by integrating the firmware into the dump
process. New Ultravisor calls are used to initiate the dump process,
dump CPU data, dump the memory state, and finally complete the dump
process. The UV calls are exposed by KVM via the new KVM_PV_DUMP
command and its subcommands. The guest's data is fully encrypted and
can only be decrypted by the entity that owns the customer
communication key for the dumped guest. Dumping also needs to be
allowed via a flag in the SE header.
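
[For readers following along, a minimal sketch of that subcommand
sequence as a userspace client might drive it, assuming the
kvm_pv_cmd/kvm_s390_pv_dmp uapi layout proposed alongside this series;
vm_fd, the buffers and the error handling are illustrative only:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int pv_dump(int vm_fd, uint64_t subcmd, void *buff,
                   uint64_t buff_len, uint64_t gaddr)
{
    struct kvm_s390_pv_dmp dmp = {
        .subcmd = subcmd,
        .buff_addr = (uint64_t)(uintptr_t)buff,
        .buff_len = buff_len,
        .gaddr = gaddr,
    };
    struct kvm_pv_cmd cmd = {
        .cmd = KVM_PV_DUMP,
        .data = (uint64_t)(uintptr_t)&dmp,
    };

    return ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);
}

static int dump_protected_guest(int vm_fd, uint8_t *ss_buff, uint64_t ss_len,
                                uint8_t *compl_buff, uint64_t compl_len)
{
    /* 1. Switch the protected guest into dump mode. */
    if (pv_dump(vm_fd, KVM_PV_DUMP_INIT, NULL, 0, 0)) {
        return -1;
    }
    /*
     * 2. Fetch the encrypted storage state; QEMU does this per guest
     *    memory block, here a single block starting at gaddr 0.
     */
    if (pv_dump(vm_fd, KVM_PV_DUMP_CONFIG_STOR_STATE, ss_buff, ss_len, 0)) {
        return -1;
    }
    /*
     * CPU data is fetched separately per vcpu, via the
     * KVM_S390_PV_CPU_COMMAND ioctl with subcommand KVM_PV_DUMP_CPU.
     * 3. Complete the dump and collect the completion data.
     */
    return pv_dump(vm_fd, KVM_PV_DUMP_COMPLETE, compl_buff, compl_len, 0);
}
]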

On the QEMU side of things we store the PV dump data in the newly
introduced architecture-specific ELF sections (storage state and
completion data) and in the CPU notes (for the CPU dump data).
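
[Concretely, the sections are produced through per-section size and
contents callbacks; an illustrative writer loop over the sections[]
table from the diff further down could look like this, where
write_elf_section() is a hypothetical placeholder rather than a QEMU
API:

static void write_pv_sections(DumpState *s)
{
    for (int i = 0; sections[i].sections_size_func; i++) {
        uint64_t size = sections[i].sections_size_func(s);
        g_autofree uint8_t *buff = g_malloc0(size);

        /* Fill the buffer with this section's PV dump data ... */
        sections[i].sections_contents_func(s, buff);
        /* ... and emit it under its name, e.g. "pv_mem_meta". */
        write_elf_section(s, sections[i].sctn_str, buff, size);
    }
}
]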

The zgetdump tool can then be used to convert the encrypted QEMU dump
into an unencrypted one.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>


Seems like I forgot to amend this commit with the naming changes before sending:

diff --git i/target/s390x/arch_dump.c w/target/s390x/arch_dump.c
index 5e8e03d536..233f23c071 100644
--- i/target/s390x/arch_dump.c
+++ w/target/s390x/arch_dump.c
@@ -286,14 +286,14 @@ int s390_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs,
 }

 /* PV dump section size functions */
-static uint64_t get_dump_stor_state_size_from_len(uint64_t len)
+static uint64_t get_stor_state_size_from_len(uint64_t len)
 {
     return (len / (1 << 20)) * kvm_s390_pv_dmp_get_size_stor_state();
 }

 static uint64_t get_size_stor_state(DumpState *s)
 {
-    return get_dump_stor_state_size_from_len(s->total_size);
+    return get_stor_state_size_from_len(s->total_size);
 }

 static uint64_t get_size_complete(DumpState *s)
@@ -316,7 +316,8 @@ static int get_data_complete(DumpState *s, uint8_t *buff)
     return rc;
 }

-static int dump_mem(DumpState *s, uint64_t gaddr, uint8_t *buff, uint64_t buff_len)
+static int get_stor_state_block(DumpState *s, uint64_t gaddr, uint8_t *buff,
+                                uint64_t buff_len)
 {
     /* We need the gaddr + len and something to write to */
     if (!pv_dump_initialized) {
@@ -325,7 +326,7 @@ static int dump_mem(DumpState *s, uint64_t gaddr, uint8_t *buff, uint64_t buff_len)
     return kvm_s390_dump_mem(gaddr, buff_len, buff);
 }

-static int get_data_mem(DumpState *s, uint8_t *buff)
+static int get_store_state(DumpState *s, uint8_t *buff)
 {
     int64_t memblock_size, memblock_start;
     GuestPhysBlock *block;
@@ -341,9 +342,9 @@ static int get_data_mem(DumpState *s, uint8_t *buff)
         memblock_size = dump_filtered_memblock_size(block, s->filter_area_begin,
                                                     s->filter_area_length);

-        off = get_dump_stor_state_size_from_len(block->target_start);
-        dump_mem(s, block->target_start, buff + off,
-                 get_dump_stor_state_size_from_len(memblock_size));
+        off = get_stor_state_size_from_len(block->target_start);
+        get_stor_state_block(s, block->target_start, buff + off,
+                             get_stor_state_size_from_len(memblock_size));
     }

     return 0;
@@ -354,7 +355,7 @@ struct sections {
     int (*sections_contents_func)(DumpState *s, uint8_t *buff);
     char sctn_str[12];
 } sections[] = {
-    { get_size_stor_state, get_data_mem, "pv_mem_meta"},
+    { get_size_stor_state, get_store_state, "pv_mem_meta"},
     { get_size_complete, get_data_complete, "pv_compl"},
     {NULL , NULL, ""}
 };
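
[For anyone sanity-checking the arithmetic: get_stor_state_size_from_len()
scales in whole megabytes, so with a made-up UV answer of 64 bytes of
storage-state data per MB, a 512 MiB guest needs a 512 * 64 = 32 KiB
section, and a memory block starting at guest address 256 MiB lands at
offset 256 * 64 = 16 KiB, which is exactly the off that get_store_state()
computes per block. The real per-MB value comes from the Ultravisor via
kvm_s390_pv_dmp_get_size_stor_state().]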


