Hi list -
For a particular application, I need to make a block that can save a ton of history in some type of circular buffer - think 10M+ samples - and the entire buffer needs to be available inside each call to work(). It seems like I have two choices:
1) Implement my own internal buffer, and create a state machine that copies a batch of samples into the large buffer during each call to work. After doing that, run the logic that needs access to the huge history.
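In case it helps frame the question, here's a minimal sketch of what I mean by option 1 - a fixed-capacity ring that each work() call would copy its input into before running the history-dependent logic. This is plain Python for illustration only (names like RingHistory are made up); a real block would use preallocated numpy arrays rather than Python lists:

```python
class RingHistory:
    """Toy circular buffer: append a batch per work() call, read back the
    full stored history when needed. Illustrative only."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = [0.0] * capacity   # preallocated storage
        self.write = 0                # next write index
        self.count = 0                # samples stored so far (<= capacity)

    def append_batch(self, samples):
        # Copy one work() call's worth of input into the ring,
        # overwriting the oldest samples once the ring is full.
        for s in samples:
            self.buf[self.write] = s
            self.write = (self.write + 1) % self.capacity
            self.count = min(self.count + 1, self.capacity)

    def snapshot(self):
        # Return the stored history, oldest sample first.
        if self.count < self.capacity:
            return self.buf[:self.count]
        return self.buf[self.write:] + self.buf[:self.write]
```

The downside, of course, is the extra copy of every sample into my own buffer on every call to work.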
2) Tell the scheduler to make the already-existing upstream buffer large enough that I can declare an enormous history in my block.
I understand that things like page size, architecture, and available memory come into play, but realistically, will I be able to accomplish this by setting a large output buffer on the upstream block (using set_min_output_buffer) and calling set_history on my block with a large value?
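For scale, the raw memory footprint doesn't look scary on a modern machine - assuming 8-byte complex samples (gr_complex, two float32s), which may not match my actual item size:

```python
# Back-of-the-envelope memory cost of a 10M-sample history,
# assuming 8 bytes per sample (gr_complex: two float32s).
samples = 10_000_000
bytes_per_sample = 8
total_bytes = samples * bytes_per_sample
print(total_bytes / 1e6, "MB")  # 80.0 MB
```

So ~80 MB for the buffer itself, before whatever rounding the scheduler does to page-size multiples; my question is really whether the buffer machinery tolerates sizes that large, not whether the RAM exists.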
Thanks,
Sean