discuss-gnuradio

Re: [Discuss-gnuradio] Re: UC professor says SW radio a bad idea


From: Vanu Bose
Subject: Re: [Discuss-gnuradio] Re: UC professor says SW radio a bad idea
Date: Tue, 05 Feb 2002 12:28:45 -0500

> Other speakers in the evening session, concentrating on the effects
> of low-voltage on digital circuits, offered conciliatory messages
> about the consequences. Bob Brodersen, the University of California
> professor who leads the Berkeley Wireless Research Center, pointed
> out that dedicated function architectures using large amounts of
> parallelism offered the highest efficiency - in terms of millions of
> operations per second (Mops) per milliwatt - per unit area of
> silicon. His thesis was based on an examination of the processors
> presented at ISSCC over the past 20 years.  The software-intensive
> general-purpose processors with high clock rates fared the worst in
> terms of Mops accomplished per milliwatt, he said. DSPs with
> parallel math operations show a more efficient use of current and
> voltage, or more Mops per mW. But the most efficient semiconductor
> devices - those demonstrating four orders of magnitude efficiency
> improvement - were dedicated processors for MPEG-2 and 802.11
> decoding. Such reasoning rules against a general-purpose processor -
> and software-intensive operations - for portable systems. "The
> software radio is a really bad idea," Brodersen concluded.



It is important to look at this statement in the right
context. Brodersen is an expert in very low-power systems, and it will
always be true that you can build a lower-power system with a
dedicated ASIC, using just the right number of transistors for the
job, than with a processor that can do many things but is not
optimized to minimize power for any single task.

So if you want a low-power, dedicated system, then I would agree with
Bob's assessment. However, if you want a flexible system that can
interoperate with multiple standards, or be upgraded in software to
new standards in the future, then a dedicated chip is a really bad idea.

The bottom line is that there is a price to pay for flexibility. You
can pay it in power consumption, by using a processor rather than a
dedicated chip. Or you can pay it in size and cost, by building
multiple dedicated chips into one system and running only the one you
need at a given time. Dual-mode cell phones use this approach, but it
doesn't scale well if you want to build in more standards.
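To make the tradeoff concrete, here is a rough back-of-envelope sketch
in Python. The efficiency ratios are loosely inspired by the
orders-of-magnitude figures quoted above, but the specific numbers
(Mops/mW values, chip area, number of standards) are hypothetical,
chosen only to illustrate how one might compare the two ways of paying
for flexibility:

```python
# Hypothetical efficiency figures in Mops per mW. Only the rough
# ratios matter here -- on the order of four orders of magnitude
# between a general-purpose processor and a dedicated ASIC, per the
# ISSCC survey quoted above.
EFFICIENCY_MOPS_PER_MW = {
    "general-purpose processor": 0.01,   # hypothetical
    "parallel DSP":              1.0,    # hypothetical
    "dedicated ASIC":            100.0,  # hypothetical
}

WORKLOAD_MOPS = 100.0  # hypothetical sustained signal-processing load


def power_mw(arch: str, workload_mops: float = WORKLOAD_MOPS) -> float:
    """Power needed to sustain the workload on a given architecture."""
    return workload_mops / EFFICIENCY_MOPS_PER_MW[arch]


# The power price of flexibility: one programmable part covers all
# standards, but burns far more power for the same workload.
for arch in EFFICIENCY_MOPS_PER_MW:
    print(f"{arch:28s} {power_mw(arch):>10.1f} mW")

# The size/cost price of flexibility: one dedicated chip per standard,
# even though only one runs at a time (the dual-mode phone approach).
n_standards = 4       # hypothetical
asic_area_mm2 = 10.0  # hypothetical area per dedicated chip
print(f"multi-chip silicon for {n_standards} standards: "
      f"{n_standards * asic_area_mm2:.0f} mm^2")
```

The point of the sketch is simply that the cost grows along different
axes: milliwatts for the programmable approach, silicon area and unit
cost (linearly in the number of standards) for the multi-chip approach.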

There are clearly markets and applications for which each approach is
the best fit; no one approach is globally better or worse than the
others.

-Vanu


