From: Yongbok Kim
Subject: Re: [Qemu-devel] [PATCH v3 2/2] target-mips: Misaligned memory accesses for MSA
Date: Thu, 14 May 2015 10:00:15 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0

On 13/05/2015 20:28, Richard Henderson wrote:
> On 05/13/2015 08:37 AM, Yongbok Kim wrote:
>> +static inline void ensure_atomic_msa_block_access(CPUMIPSState *env,
>> +                                                  target_ulong addr,
>> +                                                  int rw,
>> +                                                  int mmu_idx)
>>  {
>> +#if !defined(CONFIG_USER_ONLY)
>> +#define MSA_PAGESPAN(x) (unlikely((((x) & ~TARGET_PAGE_MASK)             \
>> +                                   + MSA_WRLEN/8 - 1) >= TARGET_PAGE_SIZE))
>> +    CPUState *cs = CPU(mips_env_get_cpu(env));
>> +    target_ulong page_addr;
>>  
>> +    if (MSA_PAGESPAN(addr)) {
>> +        /* first page */
>> +        tlb_fill(cs, addr, rw, mmu_idx, 0);
>> +        /* second page */
>> +        page_addr = (addr & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
>> +        tlb_fill(cs, page_addr, rw, mmu_idx, 0);
>>      }
>> +#endif
>>  }
> This doesn't do quite what you think it does.  It does trap if the page isn't
> mapped at all, but it doesn't trap if e.g. rw is set and the page is 
> read-only.
> That requires a subsequent check for what permissions were installed by
> tlb_set_page.

I must double-check the behaviour, but please note that here we are filling
QEMU's TLB entries according to the target's TLB entries, so any permission
issue should already be resolved.
I agree with your comment in a later email that for loads this check is
overkill, since all of the element loads can simply be issued first and the
store into the vector register can follow.
I wasn't sure about that, because this TLB filling happens only when an
access crosses the page boundary.
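
For reference, here is a rough sketch (not part of the patch) of the kind of
follow-up check I understand you to mean, reusing the lookup pattern from
softmmu_template.h; check_msa_access_perms is a hypothetical helper name:

static inline void check_msa_access_perms(CPUMIPSState *env,
                                          target_ulong addr,
                                          int rw, int mmu_idx)
{
    /* Hypothetical sketch: after tlb_fill(), verify that tlb_set_page()
     * actually installed an entry permitting the requested access,
     * rather than assuming the fill succeeded with the right
     * permissions. */
    int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
    CPUTLBEntry *tlbe = &env->tlb_table[mmu_idx][index];
    target_ulong tlb_addr = rw ? tlbe->addr_write : tlbe->addr_read;

    if ((addr & TARGET_PAGE_MASK) !=
        (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
        /* No entry with the required permission: refill, which raises
         * the guest exception on a genuine permission fault. */
        tlb_fill(CPU(mips_env_get_cpu(env)), addr, rw, mmu_idx, 0);
    }
}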

> I had thought there was a way to look this up besides duplicating the code in
> softmmu_template.h, but I suppose that's in a patch set that never made it in.
>
>
>> +    if (unlikely(addr & ((1 << DF) - 1))) {                             \
>> +        /* work-around for misaligned accesses */                       \
>> +        for (i = 0; i < DF_ELEMENTS(DF_BYTE); i++) {                    \
>> +            pwd->b[i] = do_lbu(env, addr + (i << DF_BYTE), mmu_idx);    \
>> +        }                                                               \
>> +        correct_vector_endianness_ ## TYPE(pwd, pwd);                   \
> Why byte accesses?  The softmmu helpers are guaranteed to support 
> misalignment.
The reason for using byte-by-byte accesses here is that we need to
distinguish instructions for which misalignment is permitted from those for
which it is not. MIPS R5 doesn't support misaligned accesses while MSA does.
In do_unaligned_access() it is quite hard to recover that information, which
was the trigger for your patch.
Since I haven't had any indication of your work landing, I took this
work-around to avoid the problem.
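
For illustration only, the technique the macro expands to amounts to
something like the following standalone sketch. guest_ldub and msa_ld_w are
made-up stand-ins for do_lbu() and the generated helper, and it assumes a
big-endian guest on a little-endian host so the per-element byte swap is
visible:

#include <stdint.h>

typedef union {
    uint8_t  b[16];
    uint32_t w[4];
} vreg128;

/* Stand-in for QEMU's do_lbu(): a single guest byte load, which by
 * definition can never be misaligned. */
static uint8_t guest_ldub(const uint8_t *mem, uintptr_t addr)
{
    return mem[addr];
}

/* Emulate a 128-bit vector word load at a possibly misaligned address
 * using only byte loads, then fix up each 32-bit element's byte order,
 * analogous to correct_vector_endianness_w() in the patch. */
static void msa_ld_w(vreg128 *wd, const uint8_t *mem, uintptr_t addr)
{
    int i;
    for (i = 0; i < 16; i++) {
        wd->b[i] = guest_ldub(mem, addr + i);
    }
    for (i = 0; i < 4; i++) {
        wd->w[i] = __builtin_bswap32(wd->w[i]);
    }
}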

> As an aside, consider moving away from
>
> #define HELPER_LD(name, insn, type)                                     \
> static inline type do_##name(CPUMIPSState *env, target_ulong addr,      \
>                              int mem_idx)                               \
> {                                                                       \
>     switch (mem_idx)                                                    \
>     {                                                                   \
>     case 0: return (type) cpu_##insn##_kernel(env, addr); break;        \
>     case 1: return (type) cpu_##insn##_super(env, addr); break;         \
>     default:                                                            \
>     case 2: return (type) cpu_##insn##_user(env, addr); break;          \
>     }                                                                   \
> }
>
> to using helper_ret_*_mmu directly.  Which allows you to specify the mmu_idx
> directly rather than bouncing around different thunks.  It also allows you to
> pass in GETRA(), which would allow these helpers to use cpu_restore_state on
> faults.
>
Agreed.
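As a sketch of the direct call (assuming the (env, addr, mmu_idx, retaddr)
signature of helper_ret_ldub_mmu in the current tree; msa_ldub is a made-up
wrapper name, not from the patch):

static inline uint8_t msa_ldub(CPUMIPSState *env, target_ulong addr,
                               int mmu_idx, uintptr_t retaddr)
{
    /* One call parameterized by mmu_idx instead of the per-index
     * cpu_ldub_{kernel,super,user} thunks; retaddr comes from GETRA()
     * at the call site so a fault can use cpu_restore_state. */
    return helper_ret_ldub_mmu(env, addr, mmu_idx, retaddr);
}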
> r~

Regards,
Yongbok


