From: Marcus D. Leech
Subject: Re: GRC max sample rate
Date: Sun, 26 Jul 2020 14:48:01 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0
On 07/26/2020 02:10 AM, Koyel Das (Vehere) wrote:
Let us for a moment pretend that we aren't talking about SDRs and USRPs and DSP. Let us cast the problem more generally as: "I need my computer to do 200 million 'things' every second--can my computer do this?". The answer to that question is categorically NOT simple and well characterized. It really isn't. The analysis breaks down along a number of different dimensions, including:

o What are the 'things' you want to do at 200e6 'things per second'? How many CPU instructions does a 'thing' decompose into, on average?
o What is your CPU architecture? Can it even handle that many instructions per second?
o Is your CPU heavily or lightly pipelined?
o Is it RISC or CISC?
o Is there meaningful branch prediction and cache pre-fill?
o Is there more than one CPU in a given system?
o Does the CPU architecture have internal parallelism--multiple execution units for common operations like basic arithmetic?
o Are there multiple instruction-decode units in a single CPU?
o Are floating-point math units shared among CPUs on the chip, or are there dedicated FPUs? Is there inherent parallelism in the FPU?
o Are there vector execution units available, and does the software take advantage of them?
o What is the memory architecture? How fast is it?
o What is the cache size? Are there separate I and D caches? How is the cache managed?
o What is the basic clock rate of the CPU, and how does that decompose into memory bandwidth, instruction fetch, and other sub-cycles?
o What is the I/O architecture, and how fast are the relevant buses? Are there multiple buses?
o Do I have enough RAM so that there's never any pressure on it, and large working-set sizes can be maintained?
o What of the application? What is the average code-path length for each 'thing' that needs to be done?
o Are there opportunities for parallelism in the application? Have they been taken advantage of?
o Does the operating system schedule CPU-bound jobs appropriately? Can this be tweaked?
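The first of those questions--how many CPU cycles each 'thing' can afford--lends itself to back-of-envelope arithmetic. A minimal sketch follows; the 3 GHz clock rate, 8-wide SIMD, and 4-core figures are illustrative assumptions, not measurements of any particular machine:

```python
def cycle_budget(clock_hz, things_per_sec):
    """CPU cycles available per 'thing', ignoring pipelining, cache
    misses, and memory stalls -- a crude upper bound on how long the
    per-'thing' code path can be."""
    return clock_hz / things_per_sec

# A hypothetical 3 GHz core asked to do 200e6 'things' per second has
# only 15 cycles per 'thing' -- so each 'thing' must decompose into very
# few instructions, or the work must be spread across cores and lanes.
budget = cycle_budget(3.0e9, 200e6)
print(budget)  # 15.0

# With (hypothetically) 8-wide vector units and 4 cores, fully
# exploited, the effective per-'thing' budget grows accordingly:
effective = budget * 8 * 4
print(effective)  # 480.0
```

Whether real code gets anywhere near that effective budget depends on the rest of the questions in the list--memory bandwidth, scheduling, and whether the software actually uses the vector units at all.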
This is a fairly-standard "capacity planning" exercise that is, at a high level, unrelated to USRPs or SDRs or DSP.

Being robustly successful with SDR requires knowledge and experience in a number of disciplines, including:

o RF and analog design
o Sometimes, FPGA development and design
o Digital signal processing
o Computer system design and capacity planning
o Software development methodologies
o Non-trivial algorithm comprehension and development

If you, as an individual, or in aggregate as a development team, are missing too many of these, robust success will be a much longer time coming. That's just an inherent property of doing SDR development, and of deploying SDR-based systems.