From: Andy Lutomirski
Subject: Re: [PATCH v5 04/13] mm/shmem: Restrict MFD_INACCESSIBLE memory against RLIMIT_MEMLOCK
Date: Thu, 07 Apr 2022 10:09:55 -0700
User-agent: Cyrus-JMAP/3.7.0-alpha0-386-g4174665229-fm-20220406.001-g41746652


On Thu, Apr 7, 2022, at 9:05 AM, Sean Christopherson wrote:
> On Thu, Mar 10, 2022, Chao Peng wrote:
>> Since page migration / swapping is not supported yet, MFD_INACCESSIBLE
>> memory behaves like longterm pinned pages and thus should be accounted to
>> mm->pinned_vm and be restricted by RLIMIT_MEMLOCK.
>> 
>> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
>> ---
>>  mm/shmem.c | 25 ++++++++++++++++++++++++-
>>  1 file changed, 24 insertions(+), 1 deletion(-)
>> 
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 7b43e274c9a2..ae46fb96494b 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -915,14 +915,17 @@ static void notify_fallocate(struct inode *inode, pgoff_t start, pgoff_t end)
>>  static void notify_invalidate_page(struct inode *inode, struct folio *folio,
>>                                 pgoff_t start, pgoff_t end)
>>  {
>> -#ifdef CONFIG_MEMFILE_NOTIFIER
>>      struct shmem_inode_info *info = SHMEM_I(inode);
>>  
>> +#ifdef CONFIG_MEMFILE_NOTIFIER
>>      start = max(start, folio->index);
>>      end = min(end, folio->index + folio_nr_pages(folio));
>>  
>>      memfile_notifier_invalidate(&info->memfile_notifiers, start, end);
>>  #endif
>> +
>> +    if (info->xflags & SHM_F_INACCESSIBLE)
>> +            atomic64_sub(end - start, &current->mm->pinned_vm);
>
> As Vishal's to-be-posted selftest discovered, this is broken as
> current->mm may be NULL.  Or it may be a completely different mm,
> e.g. AFAICT there's nothing that prevents a different process from
> punching a hole in the shmem backing.
>
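To make the failure mode concrete, one in-kind fix would be to record which
mm was charged when the memfd was created and to uncharge that mm on
invalidation, instead of whatever current happens to be.  A minimal sketch
(the charged_mm field and its wiring are hypothetical, not something anyone
has posted):

        /*
         * Sketch only: charged_mm records the mm charged at memfd creation.
         * It would need an mmgrab() when set and an mmdrop() at inode
         * eviction so the mm_struct stays valid after the creator exits.
         */
        struct shmem_inode_info {
                /* ... existing fields ... */
                struct mm_struct *charged_mm;
        };

        static void notify_invalidate_page(struct inode *inode,
                                           struct folio *folio,
                                           pgoff_t start, pgoff_t end)
        {
                struct shmem_inode_info *info = SHMEM_I(inode);

                /* ... memfile notifier invalidation as in the patch ... */

                /*
                 * Uncharge the mm that was actually charged, not
                 * current->mm, which may be NULL or belong to a different
                 * process punching a hole in the same shmem backing.
                 */
                if ((info->xflags & SHM_F_INACCESSIBLE) && info->charged_mm)
                        atomic64_sub(end - start,
                                     &info->charged_mm->pinned_vm);
        }

But that only papers over the accounting question.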

How about just not charging the mm in the first place?  There’s precedent: 
ramfs and hugetlbfs (at least sometimes — I’ve lost track of the current 
status).

In any case, for an administrator, trying to assemble the various rlimits into
a coherent policy is, and always has been, quite messy. ISTM cgroup limits,
which can actually be summed across processes usefully, are much better.

So, aside from the fact that these fds aren’t in a filesystem and are thus 
available by default, I’m not convinced that this accounting is useful or 
necessary.

Maybe we could just have some switch that's required to enable creation of
private memory in the first place, and anyone who flips that switch without
configuring cgroups is subject to DoS.
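
If we went that route, the switch could be as simple as a vm sysctl checked
at creation time.  A rough sketch, assuming a hypothetical knob name and
wiring (none of this exists today):

        /* Hypothetical opt-in knob, default off. */
        static int sysctl_memfd_inaccessible_enabled;

        static struct ctl_table memfd_inaccessible_sysctls[] = {
                {
                        .procname       = "memfd_inaccessible_enabled",
                        .data           = &sysctl_memfd_inaccessible_enabled,
                        .maxlen         = sizeof(int),
                        .mode           = 0644,
                        .proc_handler   = proc_dointvec_minmax,
                        .extra1         = SYSCTL_ZERO,
                        .extra2         = SYSCTL_ONE,
                },
                { }
        };

        /*
         * In memfd_create(): refuse MFD_INACCESSIBLE unless the
         * administrator has explicitly opted in (and, presumably, has set
         * up cgroup limits to bound the unmovable memory).
         */
        if ((flags & MFD_INACCESSIBLE) && !sysctl_memfd_inaccessible_enabled)
                return -EPERM;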


