A frequent question that I get is how many participants or repeated measures are needed to perform DSEM or related (multilevel) time series analyses. The answer is the typical "methodologist mic-drop" answer: It depends.
In particular, it depends on:

1) what analysis exactly you plan to perform, and which outcomes of that analysis you care about; and
2) what reality actually looks like, that is, the true values of the effects and other parameters at play.
As a methodologist, I generally have no idea about either point 1 or 2, which means I cannot give you a reasonable answer to the question of how many participants or repeated measures you will need. You should in general be suspicious of anyone without that knowledge who does give you a specific number with confidence.
I can, however, give you pointers on how to find out. Typically, for these kinds of analyses and models, a (Monte Carlo) simulation study is needed. This means data are simulated from a reality you made up (i.e., point 2), and you then perform on those simulated data the analyses you plan to do on the real data (i.e., point 1). This is repeated many times, and you then check how often you achieved your goal (for example, how often you correctly detected an effect) for a particular sample size. Based on that, you figure out how large your sample size should be.
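To make that procedure concrete, here is a minimal Python sketch of such a Monte Carlo power simulation. The "reality" is a deliberately simple stand-in: each participant provides repeated measures from an AR(1) process with true autoregressive effect `phi`, and the "analysis" is equally crude (a per-person least-squares estimate of `phi`, followed by a t-test on the mean estimate). All the numbers and the model itself are made-up placeholders; in practice your simulation model and analysis would be whatever you settled on under points 1 and 2.

```python
# Made-up toy example of a Monte Carlo power simulation; not a DSEM.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_person(phi, n_obs):
    """Simulate one participant's AR(1) time series (the made-up 'reality')."""
    y = np.zeros(n_obs)
    for t in range(1, n_obs):
        y[t] = phi * y[t - 1] + rng.normal()
    return y

def estimate_phi(y):
    """Crude per-person least-squares estimate of the autoregressive effect."""
    return np.polyfit(y[:-1], y[1:], 1)[0]

def power(phi, n_persons, n_obs, n_reps=200, alpha=0.05):
    """Proportion of replications in which the effect is detected."""
    hits = 0
    for _ in range(n_reps):
        phis = [estimate_phi(simulate_person(phi, n_obs))
                for _ in range(n_persons)]
        hits += stats.ttest_1samp(phis, 0.0).pvalue < alpha
    return hits / n_reps

# Try increasing numbers of participants until the result satisfies you.
for n_persons in (10, 25, 50):
    print(n_persons, power(phi=0.3, n_persons=n_persons, n_obs=30))
```

The same loop structure works for any goal: swap in your own data-generating model, your own analysis, and your own definition of "achieved the goal", and read off the smallest sample size that reaches it often enough.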
You can do such a study from scratch yourself, or you can rely on tools available for this. I'll list the ones I know of below. However, all approaches will require at least some input based on 1) and 2). If you are interested in learning how to simulate data yourself, or in seeing how that works, you can visit my page on data simulation.
Hence, the main thing you will need to do first for any sample size analysis is think very carefully about point 1: what do you want to learn, what analysis exactly do you need for that, and which outcomes from that analysis are of interest? This means you need to know how you will measure all variables that are part of the relevant analyses, which models exactly you will fit, and which results from those models you care about.
Next, for your outcomes of interest, consider what the smallest (closest to zero) true value would be that you would still consider meaningful. Meaningful in the sense that you would still find it a relevant effect for your purposes. For example, what would be the smallest effect that still warrants giving the new treatment you are investigating to people, instead of the usual treatment? You then base the true values of your parameters of interest on that. If you want a useful sample size analysis, I recommend against relying only on default effect sizes like Cohen's d or explained variances or whatever; rather, focus on your own goals and outcomes.
Based on all this, you can pick true values for all the parameters of your model of interest, which you'll need for many of the tools listed. This is generally not easy, especially for the parameters you are not directly interested in but that may still affect your sample size analysis. (This also means you need to understand which ones those are, or get assistance from a methodologist.) You will have to factor in the measurement scales of your variables, and parameter values need to make sense with each other and with those measurement scales. If you are unsure what reasonable values would be for certain parameters, you can consider multiple options for those and obtain a sample size estimate for each scenario.
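When you are unsure about some parameter values, the "multiple scenarios" idea is just a loop over a grid of candidate values. Here is a sketch of that looping pattern in Python; `required_n` is a hypothetical placeholder for whatever simulation-based routine you actually use, with a made-up formula only so the example runs.

```python
# Sketch of a scenario grid for a sample size analysis.
# required_n is a made-up placeholder, not a real power routine.
from itertools import product

def required_n(phi, resid_sd):
    """Placeholder: pretend larger effects need fewer participants."""
    return max(10, round(100 * resid_sd / abs(phi)))

for phi, resid_sd in product((0.1, 0.3, 0.5), (0.5, 1.0)):
    print(f"phi={phi}, resid_sd={resid_sd}: n = {required_n(phi, resid_sd)}")
```

Reporting the resulting range of sample sizes across scenarios is often more honest than reporting a single number based on one guessed set of parameter values.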
Finally, consider how precisely you need to estimate those outcomes. For example, do you mainly want to know whether a parameter is positive rather than negative? Or do you need the estimate to be very precise (maybe you want a standard error of 0.01, or whatever is relevant in your case)? Maybe you want a particular sensitivity, or false positive rate, or whatnot. This also affects how large your sample size should be: the more precision you need, the larger your sample should be.
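If precision rather than detection is your goal, the sample size criterion changes accordingly. As a minimal sketch, assuming the hypothetical goal is a standard error below some target for a simple mean effect across persons, you can increase the sample size until the expected standard error drops below that target (the 0.05 target and the 0.2 spread are made-up numbers):

```python
# Precision-based criterion: smallest n whose expected SE meets a target.
import math

def expected_se(effect_sd, n_persons):
    """Standard error of a mean effect across n_persons participants."""
    return effect_sd / math.sqrt(n_persons)

target_se = 0.05   # hypothetical precision you decided you need
effect_sd = 0.2    # assumed between-person spread of the effect

n_persons = 2
while expected_se(effect_sd, n_persons) > target_se:
    n_persons += 1
print(n_persons)  # smallest n meeting the precision target
```

For more complex models there is no closed-form standard error like this, and you would instead read the standard errors off the simulated analyses and average them per candidate sample size.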
If you have thought all that through, there are various resources you can consider using for a sample size analysis. Some of them are limited to quite specific models/settings that may or may not suit your context.