In my opinion the most important issue is the choice of the time
slice for the analysis; e.g. in the sleep EEG analyses we processed
data in 2.5 sec slices (256 samples/slice, as the sampling frequency
was 102.4 Hz) and used the results for graphing and subsequent aggregation
of the data. I mention this as I wouldn't like to see people believing
they should try to compute Sigma/Phi/Omega for one full day or so!
For the first exploratory steps, I'd rather go for short slices.
Omega complexity is a function of the eigenvalues of the covariance
matrix, and to have a good estimate of the matrix, a reasonable number
of data vectors (i.e. simultaneous readings of the REGs) should be
used; it should be at least 3-4 times the number of `channels'. E.g. with
27 REGs sampled once a second, two-minute slices (120 vectors)
would do fine; that is, you should split the stream into chunks of 120 data
vectors, subtract the theoretical mean (100?) from all data vectors,
and then process the data chunks with parameters -N 120,27 and
write the output of `nsfo' as consecutive triplets of Sigma, Phi and
Omega values. This will result in a new data series, e.g. 720 x 3
values per day, suitable for graphic presentation or subsequent analyses.
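
In case a little illustration helps: below is a minimal Python/NumPy sketch
of the slicing step. It is *not* the `nsfo' program itself, and it computes
only Omega, taken here in its usual form as the exponential of the entropy
of the normalized covariance eigenvalues; I also take the covariance about
the theoretical mean you subtract, which is my reading of the recipe, not
necessarily what nsfo does internally. Function names are just illustrative,
and the default mean of 100 only echoes the "(100?)" guess above.

    import numpy as np

    def omega_complexity(chunk):
        # chunk: (n_samples, n_channels), already centred on the theoretical mean.
        # Covariance (scatter) matrix about that mean, channels x channels:
        cov = chunk.T @ chunk / chunk.shape[0]
        eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)  # real, non-negative
        lam = eig / eig.sum()                              # normalize to sum 1
        lam = lam[lam > 0]                                  # avoid log(0)
        # Omega = exp(-sum lam_i * ln lam_i): 1 means all variance in one
        # component, n_channels means variance spread evenly over all of them.
        return float(np.exp(-np.sum(lam * np.log(lam))))

    def slice_and_process(data, slice_len, theoretical_mean=100.0):
        # data: (n_samples, n_channels) stream of simultaneous REG readings.
        # Returns one Omega value per complete slice of slice_len vectors.
        n_slices = data.shape[0] // slice_len
        return np.array([
            omega_complexity(data[i*slice_len:(i+1)*slice_len] - theoretical_mean)
            for i in range(n_slices)
        ])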
Of course, you could go for larger data slices, say 5 minutes, which
in the example above (still assuming 1 REG data vector / second)
would be -N 300,27, and we would have 1440/5 = 288 data points / day.
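
With the sketch above, the two slice sizes would look like this (stand-in
random numbers, one reading per second from 27 REGs for a full day of
86400 seconds, just to check the bookkeeping):

    day = np.random.normal(100.0, 7.0, size=(86400, 27))  # placeholder, not real REG data
    omega_2min = slice_and_process(day, slice_len=120)     # 86400/120 = 720 values/day
    omega_5min = slice_and_process(day, slice_len=300)     # 86400/300 = 288 values/day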
The decision re data slicing is up to you. The shorter the slice, the higher
the time resolution of the processed data series, at the cost of less
reliable (noisier) estimates of the parameters. The longer the slice,
the lower the time resolution, but the higher the data reduction rate and
the more robust the estimates. Just play a little bit with the parameters...
I'm looking forward to the results!
All my best,
Jiri