From: Marcus Müller
Subject: Re: [Discuss-gnuradio] Continuously Write FFT Samples to a File
Date: Sat, 21 Jan 2017 20:04:36 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0
Hi Mallesham,

that does indeed sound interesting, but first of all you have a local problem – data volume concentrating on your single receiver node. 32 MS/s is already more than you can push through a single Gigabit Ethernet connection, so you must either move immediately to more datacenter-style interconnects, or start thinking about consolidating your data where it is produced. On the other hand, compared to other SDR systems, a mere 32 MS/s from a single channel with a less-than-100% duty cycle is "not that much"; I really feel you might be running this on slightly undersized hardware.

I ask you, again, to describe what you *want* rather than what you *do* – a system specification is crucial here, and I hope Greg agrees with my opinion that the ability to handle Big Data (whatever that is, in the end) is not, by itself, a solution to a data problem. Partitioning, analyzing, reducing/compressing, filtering and discarding of data can only be designed if you have a clear concept of what your target is – and in the case of signal processing, much more than in many other big data applications, that concept is often pretty well known a priori.

So, while I really think that you're on to something very interesting here, combining distributed computing with SDR, and hope you can share a lot of your insights in the future, I also really think that you should start with a well-thought-out design of what you want to process and store. So far, you've only told us you have "FFT data" (by which you imply "spectral power estimates", which is already a reduction by half), but you haven't really explained how much of it, and in how much detail, you need. A lot of interesting aspects might arise from that – for example, if you're really after power spectra, logarithmic storage (dB!) would make a lot of sense; combining that with storing these logarithmic values in a fixed-point format could easily save you another factor of two in storage bandwidth, without you ever losing the "essence" of your data. The way in which you capture your data might, as Greg mentioned, be a key indicator of the granularity in which you distribute it.

In short: it might be helpful if you could formulate what you want to *do* with your data, not only how you want to do that.

Best regards,
Marcus

On 01/21/2017 07:37 PM, Mallesham Dasari wrote:
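The two quantitative claims in the message above can be sanity-checked with a short sketch. This assumes complex float32 samples (8 bytes each, as GNU Radio's `complex64` streams produce) and a hypothetical 0.01 dB-per-LSB fixed-point scale chosen only for illustration:

```python
import numpy as np

# Claim 1: 32 MS/s of complex float32 exceeds Gigabit Ethernet.
sample_rate = 32e6            # 32 MS/s
bytes_per_sample = 8          # complex64: 2 x float32
bits_per_second = sample_rate * bytes_per_sample * 8
print(bits_per_second / 1e9)  # 2.048 Gb/s -- more than twice GigE's 1 Gb/s

# Claim 2: storing power spectra in dB as int16 instead of float32
# halves the storage bandwidth (2 bytes vs. 4 bytes per bin) while
# keeping fine resolution.
power = np.abs(np.random.randn(1024) + 1j * np.random.randn(1024)) ** 2
power = np.clip(power, 1e-12, None)     # floor to keep dB values bounded
power_db = 10.0 * np.log10(power)

scale = 100                             # assumed: 0.01 dB per LSB
quantized = np.round(power_db * scale).astype(np.int16)

# Round-trip error stays within half an LSB, i.e. 0.005 dB:
recovered_db = quantized.astype(np.float32) / scale
print(np.max(np.abs(recovered_db - power_db)) <= 0.006)  # True
```

With a dB range of roughly -120 to +45 dB at this scale, the values fit comfortably into int16, so the factor-of-two saving comes for free in this sketch.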