Re: CVS help needed, top and htop

From: Samuel Thibault
Subject: Re: CVS help needed, top and htop
Date: Wed, 13 Aug 2008 23:23:39 +0100
User-agent: Mutt/1.5.12-2006-07-14

Madhusudan C.S, on Thu 14 Aug 2008 01:37:27 +0530, wrote:
> First of all, I wanted a small help regarding CVS. Should I now simply
> copy the directory of my project i.e procfs into the Hurd Source Tree
> of my branch that I have already checked out and just do cvs add procfs
> and commit it or should I do a cvs import of my project ?

To add code to an existing module, just do a cvs add.
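For instance (a sketch, assuming the checkout lives in ./hurd and the project directory is ./procfs; adjust the paths and file list to your tree — note that cvs add does not recurse into a directory's contents, so the files must be added too):

```shell
# Copy the project into the checked-out tree.
cp -r procfs hurd/
cd hurd

# Add the directory itself, then the files inside it.
cvs add procfs
cvs add procfs/*.c procfs/*.h

# Commit everything with an initial log message.
cvs commit -m "Add the procfs translator."
```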

> And can you please suggest me how should the entry in the Changelog
> and also the commit log for the very first log be ?

The current practice seems to be to just write that you have added the
files; see hurd/console/ChangeLog for instance.

> 2006-07-13 Madhusudan.C.S. <madhusudancs@gmail.com>
>         * Initial commit of the procfs translator. The entire history
>            of development is available at
>            http://github.com/madhusudancs/procfs/tree/master.
>            This provides a GNU/Linux compatible /proc filesystem
>            which can now support procps tools like pgrep, pkill, kill,
>            tload, top, free and psmisc tools like killall and pstree.
>            It also supports htop to run over it.

Please re-read the GNU ChangeLog style :)

ChangeLog is not meant as general documentation, only as _change_
documentation, so here just list the added files (though the URL could
indeed go there).  A general description should go e.g. in a README.
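A minimal entry in that style might look like this (the file names here are hypothetical; list whichever files you actually add):

```
2008-08-14  Madhusudan C.S.  <madhusudancs@gmail.com>

	* Makefile, main.c, procfs.c, procfs.h: New files.
	Development history at
	http://github.com/madhusudancs/procfs/tree/master.
```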

> Also, now regarding the porting of these tools: top and htop mostly
> work, as I have written in the wiki page. The per-PID shared memory
> column and the non-per-PID Caches and Buffers fields are the only
> fields not working as they do in GNU/Linux now. Any suggestions
> on these?

Indeed, I think these aren't available from GNU Mach (shared memory) or
ext2fs (cache).  I don't think we have a distinction between buffers and
cache at all.

> Also, what do you intend me to do next? I was thinking of stopping
> adding new features for a short time now and concentrating on
> documentation and fixing a few small bugs I have identified in the
> code. But what do you suggest, please?

As per Google's recommendations, the last week could indeed be spent on
this.

> And I have one very libnetfs-specific question: to implement
> /proc/self, I need to identify in some way the process, i.e. the
> client which accesses the procfs, say by the PID of the client. How
> do I get that?

Mmm, in hurd/process.defs I can see a proc_task2pid().  I couldn't find
a way to get the task_t of the requester, however.  I would have thought
that it would be somewhere in the protid structure, but I couldn't find
anything obvious except the pi part, which might be used to get to the
task_t; I don't know the RPCs well enough to know whether that's even
possible :/

> 2. In the file proc/version.c, procps determines the version
>     of the Linux kernel to which the /proc belongs, and, as I
>     suppose, the way it reads the fields from the /proc files varies
>     with it. But on the Hurd this version is returned as 0.3.0,
>     which is in no way valid for procps. As of now I have hard-coded
>     it to 2.6.18, since the procfs I have written is most compatible
>     with that version of Linux. So how do you suggest I patch it?
>     At least here we need to differentiate between the Hurd's version
>     of procps and Linux's version of procps. How do I do it?

Mmm, are you reading the _patched_ version of the Debian source?  The
kFreeBSD people have gotten it to use the contents of /proc/version
instead of uname, so you can just provide what you prefer in
/proc/version.

> 3. As per what we do above, the value of Hertz is determined
>     initially. So if it's 2.6.x it uses the ELF binary method to get
>     the Hertz value; otherwise it calls Old_Hertz() [an incredible
>     hack, in antrik's words ;-)], a function which may then read
>     from the /proc/uptime file to determine Hertz.

I'd think Old_Hertz() should work well enough for us.

