Re: [Qemu-devel] general question

From: 吴晓琳
Subject: Re: [Qemu-devel] general question
Date: Fri, 1 Jun 2012 15:11:28 +0800 (CST)

My code calls the following macro, but nothing happens.

#define invlpg_asm(pageaddr) do { \
        asm volatile("invlpg %0" \
                     : /* no outputs */ \
                     : "m"(*(const char *)(pageaddr)) /* the byte at pageaddr, as an input */ \
                     : "memory"); \
    } while (0)

pageaddr is the virtual address that needs to be invalidated.

It is said that CPUs before the i486 do not support "invlpg". How can I find
out the CPU model of my emulator?

Is my code wrong, or does the CPU of my emulator not support this instruction?

--- On Thu, 31 May 2012, 陳韋任 (Wei-Ren Chen) <address@hidden> wrote:

From: 陳韋任 (Wei-Ren Chen) <address@hidden>
Subject: Re: [Qemu-devel] general question
To: "Max Filippov" <address@hidden>
Cc: "陳韋任 (Wei-Ren Chen)" <address@hidden>, "吴晓琳" <address@hidden>, address@hidden
Date: Thu, 31 May 2012, 5:21 PM

> Hmmm, does it?
> void helper_invlpg(target_ulong addr)
> {
>     helper_svm_check_intercept_param(SVM_EXIT_INVLPG, 0);
>     tlb_flush_page(env, addr);
> }

  I could be wrong, so let the code speak. ;)

void tlb_flush_page(CPUArchState *env, target_ulong addr)
{
    if ((addr & env->tlb_flush_mask) == env->tlb_flush_addr) {
        tlb_flush(env, 1);                              /* --- (1) */
    }

    ... snip ...

    addr &= TARGET_PAGE_MASK;
    i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
    for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
        tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
    }

    tb_flush_jmp_cache(env, addr);
}

  The comment of tlb_flush (1) says,

    QEMU doesn't currently implement a global/not-global flag
    for tlb entries, at the moment tlb_flush() will also flush all
    tlb entries in the flush_global == false case.

That's why I had the impression that QEMU flushes the entire TLB. So it can
flush a particular TLB entry in tlb_flush_entry?


Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science,
Academia Sinica, Taiwan (R.O.C.)
Tel:886-2-2788-3799 #1667
Homepage: http://people.cs.nctu.edu.tw/~chenwj
