---------------------------------------------------------------------------
CM questions

1. A parallel update idea. We can add a data-offset field to each context
and set it on access. Then the update thread, running in the background,
would just skip updates on contexts that got used again before getting
updated.

2. Parallel CM. When trying to thread it, it somehow always turns into
something like BWT - sorting the contexts, then processing them with a
single model. But then it's only well-defined for typical order-N models.

2.1. Is there a way to do something similar for an unordered set of masked
contexts and mixers (with contexts too)?

2.2. To be specific, is there a way to avoid coding chunk sizes and just
derive them from the data, like BWT does?

3. Mixer FSM. We already know that a state machine is faster, better, and
uses less memory than a pair of bit counts or a linear counter. We also
more or less know how to build efficient state machines, so that's ok.
But what about mixing? In theory (DMC tries to do it in practice) it may
be possible to make a dynamic state machine for the whole data, but such a
method would use statistics from a single context anyway, which has its
limits. In any case, preserving the state of the previous occurrence of
the current o1 context within a single global state is just a waste of
resources, so it's still necessary to maintain multiple states and mix
them somehow. And that "somehow" is the problem, because a dumb solution
like

  s  = T1[s1][s2][sm]; // s1,s2 = context states, sm = mixer state
  sm = T2[s1][s2][sm]; // update

would already require 32M entries at this point (with 8-bit states, that's
two tables of 256^3 entries each), which isn't possible for a single
mixer, even if we'd forget about the initialization of these tables. So,
basically, we need a mixer that would work on a function of {s1,s2}, so
that

  s  = T1[T3[s1][s2]][sm];
  sm = T2[T3[s1][s2]][sm];

would work with 64k tables (which is still too much though). Can we make
such a version of a logistic mixer?..
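
The offset-skip idea in item 1 can be sketched without actual threads: each
context records the data offset of its last access, queued updates carry the
offset seen at enqueue time, and the updater drops any update whose context
has been touched again since. All names here (Context, PendingUpdate,
drain_updates) and the probability-update rule are hypothetical
illustrations, not an existing implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical context record: placeholder statistics plus the data
// offset of the most recent access.
struct Context {
    uint32_t p = 2048;        // 12-bit probability, placeholder stats
    uint64_t last_offset = 0; // data offset of the most recent access
};

// An update queued for the background thread, tagged with the offset
// at which it was generated.
struct PendingUpdate {
    uint32_t ctx_id;
    uint64_t offset; // offset at the time the update was queued
    int bit;
};

// Returns the number of updates actually applied; updates whose context
// was accessed again (offset changed) are skipped as stale.
int drain_updates(std::vector<Context>& ctxs,
                  const std::vector<PendingUpdate>& queue) {
    int applied = 0;
    for (const auto& u : queue) {
        Context& c = ctxs[u.ctx_id];
        if (c.last_offset != u.offset)
            continue; // context got used again before the update - skip
        c.p += u.bit ? (4096 - c.p) >> 5 : -(int)(c.p >> 5);
        ++applied;
    }
    return applied;
}
```

The single offset comparison replaces any per-context locking: a stale
update is simply discarded, on the assumption that the foreground access
already refreshed that context's statistics.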
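
As for the reduced mixer in item 3, one way to read "a mixer that works on
a function of {s1,s2}" is a logistic mixer whose weights are indexed by
T3[s1][s2] rather than by the full state pair. A minimal sketch, under the
assumption that each state already maps to a probability: here T3 just
quantizes the two probabilities into a small joint bucket (16x16 = 256
weight slots instead of 64k), and the weights are trained by the usual
logistic-mixing gradient step. The names (t3, Mixer) and the bucket count
are illustrative, not from the original.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

double stretch(double p) { return std::log(p / (1.0 - p)); }
double squash(double x)  { return 1.0 / (1.0 + std::exp(-x)); }

const int NBUCKETS = 16; // T3 range: 16*16 = 256 weight slots, not 64k

// T3: collapse a state pair into a small index via its probabilities.
int t3(double p1, double p2) {
    int b1 = (int)(p1 * NBUCKETS); if (b1 == NBUCKETS) --b1;
    int b2 = (int)(p2 * NBUCKETS); if (b2 == NBUCKETS) --b2;
    return b1 * NBUCKETS + b2;
}

struct Mixer {
    double w[NBUCKETS * NBUCKETS][2] = {}; // weights per reduced index
    int idx = 0;
    double st1 = 0, st2 = 0;

    // Mix two model probabilities in the logistic domain, with weights
    // selected by the reduced function of the state pair.
    double mix(double p1, double p2) {
        idx = t3(p1, p2);
        st1 = stretch(p1);
        st2 = stretch(p2);
        return squash(w[idx][0] * st1 + w[idx][1] * st2);
    }

    // Online gradient step on coding loss for the slot used by mix().
    void update(int bit, double p, double lr = 0.02) {
        double err = bit - p; // d(log loss)/d(mixed logit)
        w[idx][0] += lr * err * st1;
        w[idx][1] += lr * err * st2;
    }
};
```

This keeps the adaptive behaviour of a logistic mixer while shrinking the
table the way the T3 factorization suggests; whether a learned T3 (rather
than this fixed quantization) can match the full table is exactly the open
question in the note.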