Re: [Simulavr-devel] simulavrxx timers

From: Klaus Rudolph
Subject: Re: [Simulavr-devel] simulavrxx timers
Date: Tue, 27 May 2008 15:33:13 +0200
User-agent: Thunderbird (Windows/20080421)

Hi Michael,

Klaus Rudolph wrote:
My thought is to write global timer feature classes, e.g. output compare, and build nested timer classes from the feature classes.

A long time ago I thought that some more generic classes and inheritance could solve many problems. But after looking into the details, this plan was dropped.

It would be interesting to understand the specifics
about this decision. Does this mean that abstraction
is not an option and that each hardware element that
is different in some way must be re-written from scratch?

Not every time :-) In the past I did not find any "base" timer functionality for the AVR with inherited specialisations for the mega and other devices. That is what I wanted to say... Maybe the 8- or 16-bit timer register itself could serve as a base, with the methods for the functionality derived from it, but that makes no sense from my point of view.

Having too many switch/if branches inside the class

IMHO, with few exceptions, switch statements in OO
languages (C++) are an abomination. Good OO should
be using polymorphism rather than switch statements.
This is what I am saying! If you derive a specialized timer for each device and the base class does not really carry much information or abstraction, you can simply drop that design and make things easier. It is always a question of overhead versus usage.

costs too much runtime, I think.

This statement bothers me. What is the standard/
requirement that defines "too much?"

Maybe a value calculated by fuzzy logic? There is no measurable minimum rate which the simulation has to fulfill. But my standard use case is to simulate 10 or more cores at once, all interconnected on a bus system. In this use case an extra overhead of 10% for a "nice" design costs me hours.

Are we expecting the simulator to run at least 100%
"as fast" as *any* AVR? 150%? 50%? Including I/O
to the simulation environment?

Actually, simulation of a 16MHz-clocked 8515 runs faster on my PC than on the real device, if no trace is enabled. But as written above, I need multiple cores, traces, and connected nets.

For my simulation purposes, 50% for a modestly clocked
AVR seems more than reasonable, and easily achievable
with a modern host CPU.

The 68K/CPU32 simulator that I helped develop
executed faster than a real 16MHz CPU32 system and
used polymorphic C++ in the peripheral simulation.

Beware of premature optimization.

I only want to tell you that runtime is a requirement. I have seen lots of very "well" designed code which was nice to read but generated bloated executables. Normally a good design can fulfill both: runtime and beauty :-)

Let's stop the discussion here... it is not a big deal to make the new timers as fast as the old ones. So let's simply start :-)

But I actually have no experience with the newer AVRs, so it is up to you to make a more general design for the newer devices.

Analyzing the many AVR peripheral variations and
developing a properly factored design is a real
time consumer.

Yes, especially since there is no delta or inheritance sheet from Atmel! I asked many people at Atmel but never got any response! So we have to dig through thousands of lines of data sheets... that was the point where I stopped active development, because I have no fun doing tedious work which has already been done by others.

If refactoring is applied as new devices are
implemented, a peripheral framework can evolve,
rather than being designed all at once. The code
will be volatile in the early stages, and eventually
converge. The volatility can be reduced by doing
more up-front analysis and design... a
basic engineering trade-off.

A bit of theoretical discussion :-) Normally programmers do not have all the information in their heads before doing the real work. Any kind of up-front engineering will end in a theoretical house where the foundation is missing :-)

My personal way to do projects is to dig into the well-known facts, get an idea, realize a bit of the work, and refactor as needed. This keeps things moving, testable, and usable in the early stages. It makes no sense to me to spend 100 days on intergalactic brain work, because you can't get more information by discussing the same theoretical items again and again. But that is my way. I have no problem with volatile code. This is where regression testing comes into play :-)

IMHO, it's the active itch scratchers that decide.

Sorry, my English is not good enough to understand that one?! Sorry :-)

