Re: [Linux-NTFS-Dev] NTFS resizer

From: Andrew Clausen
Subject: Re: [Linux-NTFS-Dev] NTFS resizer
Date: Thu, 9 Aug 2001 10:18:30 +1000
User-agent: Mutt/1.2.5i

On Wed, Aug 08, 2001 at 12:45:22PM +0100, Anton Altaparmakov wrote:
> >A convenient way for traversing attributes would be nice.
> You traverse them in the mft record. Any copying back and forth into 
> structures is stupid IMHO. That's what find_first_attr() and find_attr() 
> are for (admittedly, they could use a better handling of the search 
> context, i.e. dynamic allocation and a special function for cleaning it up 
> or something like that).
> You have a structure: the mft record. It contains all attributes nicely 
> sorted.

It contains the on-disk attributes, not ntfs_attr.

> Why make linked lists or other constructs? Just work directly on 
> it. That's what TNG does and that is IMO the most effective, most reliable, 
> easiest to code and fastest code possible.

CPU overhead isn't really an issue in userspace.  (Which isn't to
say we should be sloppy, rather "premature optimization is the root
of all evil")

> Much like zero copy IO in the 
> network drivers layer... (I know libntfs has some stuff about mapping 
> attribute values but I have come to think that is probably not effective. 
> Well it might be but done a different way altogether. The code overhead 
> introduced by maintaining the existing structures would be huge.)

I think attributes need to be abstracted from MFT records, because
they may need to be shuffled around between MFTs.  (Eg: an attribute
gets inserted, or a run-list grows...)
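
As a sketch of what that abstraction might look like: each attribute keeps a private copy of its on-disk bytes and lives on a list hanging off the file, so it can be re-packed into whichever MFT record has room at sync time. Every name and field here is my own invention for illustration, not existing libntfs API:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical in-memory attribute, decoupled from the on-disk MFT
 * record so attributes can be shuffled between records at sync time. */
typedef struct ntfs_attr {
    unsigned int      type;     /* e.g. $DATA, $FILE_NAME */
    size_t            rec_len;  /* length of the cached on-disk record */
    unsigned char    *a_rec;    /* private copy of the ATTR_REC bytes */
    struct ntfs_attr *next;     /* singly linked list, head in ntfs_file */
} ntfs_attr;

typedef struct ntfs_file {
    int        dirty;
    ntfs_attr *attrs;           /* head of the attribute list */
} ntfs_file;

/* Append an attribute to the end of the file's list. */
static void _file_add_attr(ntfs_file *file, ntfs_attr *attr)
{
    ntfs_attr **p = &file->attrs;

    while (*p)
        p = &(*p)->next;
    attr->next = NULL;
    *p = attr;
}
```

Because each ntfs_attr owns its bytes, nothing points back into a mapped MFT record, so the in-memory view stays valid no matter how records get rearranged on disk.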

I was thinking this shuffling should happen when the file is synced
to disk.  (ntfs_file_sync()?)

BTW: I would cache by file, not by MFT record, in this case.

So, basically, when you get a file (i.e. a file isn't in the cache,
and you request it via ntfs_volume_get_file(), or whatever), it would
do something like:

MFT_REF _file_parse_mft_record_attrs(ntfs_file *file, MFT_RECORD *rec)
{
        ntfs_attr *attr;

        for [all attributes in rec] {
                attr = malloc(sizeof(ntfs_attr));
                _attr_parse (attr, (char *) rec + offset);
                _file_add_attr (file, attr);    // add to linked list
        }
        return next_record;     // 0 when there is no extension record
}

ntfs_file *ntfs_file_get_from_disk(ntfs_volume *v, MFT_REF mref)
{
        ntfs_file       *file;
        MFT_RECORD      rec;

        file = malloc(sizeof(ntfs_file));
        file->dirty = 0;

        ntfs_volume_read_mft_record(v, mref, &rec);
        _file_parse_mft_record_header(file, &rec);

        // keep reading extension records until there are none left
        while ((mref = _file_parse_mft_record_attrs(file, &rec)))
                ntfs_volume_read_mft_record(v, mref, &rec);

        return file;
}

I don't have any strong attraction to this approach, but it seems
to be easy to implement, and has a nice interface.  I don't think
CPU efficiency is an issue.  IO-wise, it looks ok... it supports
caching, etc.
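
A per-volume cache keyed by MFT reference could be as simple as the following; again, every name here (the struct layout, ntfs_volume_get_file(), the disk_reads counter) is hypothetical, just to show the lookup-then-fallback shape:

```c
#include <assert.h>
#include <stdlib.h>

typedef unsigned long long MFT_REF;

typedef struct ntfs_file {
    MFT_REF           mref;
    int               dirty;
    struct ntfs_file *next;     /* cache chaining */
} ntfs_file;

typedef struct ntfs_volume {
    ntfs_file *file_cache;      /* naive linked-list cache */
    int        disk_reads;      /* counts simulated disk reads */
} ntfs_volume;

/* Stand-in for the real ntfs_file_get_from_disk() sketched above. */
static ntfs_file *ntfs_file_get_from_disk(ntfs_volume *v, MFT_REF mref)
{
    ntfs_file *file = calloc(1, sizeof *file);

    file->mref = mref;
    v->disk_reads++;            /* here real code would read the MFT */
    return file;
}

ntfs_file *ntfs_volume_get_file(ntfs_volume *v, MFT_REF mref)
{
    ntfs_file *f;

    for (f = v->file_cache; f; f = f->next)
        if (f->mref == mref)
            return f;           /* cache hit: no IO at all */

    f = ntfs_file_get_from_disk(v, mref);
    f->next = v->file_cache;    /* insert at head */
    v->file_cache = f;
    return f;
}
```

A real cache would want eviction and reference counting, but the point is just that repeated lookups of the same file cost no IO.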

> >Perhaps make attribute a linked list, with the head in ntfs_file?
> >(When you do ntfs_file_get(), you read in the important bits of the
> >attributes - basically, everything resident in the MFT...)
> Keeping a linked list of pointers into the mft record just brings huge 
> overhead because ANY time your write to the mft record you have to discard 
> the list and redo from scratch or at least you have to verify it is still 
> valid and update it if not.

Huh?  Presumably, the only way you write to the MFT record is via
syncing a dirty file.  So, you keep the disk in sync with the internal
representation, not the other way around.

ntfs_file_sync() would look something like:

BOOL ntfs_file_sync(ntfs_file *file)
{
        if (!file->dirty)
                return TRUE;

        for [all attributes] {
                // syncs the stream, and creates the ATTR_REC
                ntfs_attr_sync (attr);
        }

        for [all attributes] {
                if (pos + attr->a_rec->length > mft_rec_size) {
                        // write out this record, then
                        // create / use an existing extension MFT rec
                }
                memcpy (mft_rec + pos, attr->a_rec, attr->a_rec->length);
                pos += attr->a_rec->length;
        }
        // write the final record

        file->dirty = 0;
        return TRUE;
}

Yes, it is lots of memcpy(), but more elegant IMHO.  We don't care
about memcpy() here, right?  (BTW: you can remove most of the
memcpy() by keeping track of what's changed, etc., if it's really
an issue.)
> >Is this all sane?  I'm still really new to all this ;)
> The file stuff is sane. Having convenient functions for accessing 
> attributes is sane. But having linked lists of attributes is not.
> At the low level it should work like this:
> 1. map_mft_record(inode struct, record number): returns the MFT_RECORD* and 
> locks it so no other threads can access it at the same time.
> 2. do whatever you want with the attributes in this mft record writing 
> whatever access functions you want for it.
> 3. unmap_mft_record(inode struct): releases the MFT_RECORD* back to the 
> library.

I'm not convinced we want this [un]map() thing on records.  Records
aren't accessed on their own, except in the context of a file.  Files
are the atomic pieces... So, I think we should just have {read,write}
on records, and [un]map() on files, although I've called it get()
here.  (unmap() is just free() on a clean file)
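
To make the "unmap() is just free() on a clean file" idea concrete, a put() on files might look like this sketch; ntfs_file_put() and the sync counter are names I made up for the example:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct ntfs_file {
    int dirty;
} ntfs_file;

static int total_syncs;         /* instrumentation for this sketch */

static void ntfs_file_sync(ntfs_file *file)
{
    total_syncs++;              /* here real code would rebuild and
                                 * write the MFT record(s) */
    file->dirty = 0;
}

/* Release a file obtained via ntfs_volume_get_file(): write back
 * only if dirty, then free. A clean file costs nothing but free(). */
void ntfs_file_put(ntfs_file *file)
{
    if (file->dirty)
        ntfs_file_sync(file);
    free(file);
}
```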

> This low level is then wrapped by the file access functions for example.
> But then again my views are really based around a kernel centric 
> implementation.

Indeed ;)

> The user space side doesn't have to work this way. It just 
> would be nice if the two are consistent from a maintenance point of view... 
> or at least that they are as consistent as possible.

Yeah, I can understand this.  But, I think the kernel code is going to
be a nightmare here... (there's a reason it hasn't been written yet ;)

> Having said all that: if you were to implement something different and it 
> works nicely then I would have no problem with using it. I would much 
> rather use something that doesn't fit my design but DOES work nicely and 
> has been written already rather than not have anything because I have no 
> time to write it myself!

That's very open-minded of you :)

