positron96 wrote:Any thoughts about practical effects of this fact? Will it impact PIDs, motor synchronization, precise timings and delays?
As far as I'm aware, the kernel in the original Lego firmware on the EV3 is configured to switch contexts between threads every 10ms (unless a thread releases the CPU earlier by calling something like yield(), wait(), etc.). This means that even if your thread has a fairly high priority, once it releases the CPU it may not get CPU time again for another 10ms in the worst case. Since the Lego firmware runs only one process (namely their own VM), this might not hurt their VM's performance. For us, however, it is a problem, because Java threads are native Linux threads, and we probably should not expect all threads to be cooperative (i.e., to call yield()). It might also affect starting motors, as a context switch might be performed between starting motor 1 and motor 2, and there is no API that guarantees atomicity.

The good news is that the kernel can be configured to switch contexts every 1ms; I'm not sure whether Andy has done so. Keep in mind, though, that context switches cause overhead, so increasing their frequency also increases overhead. The 1ms setting was actually designed to improve the responsiveness of the X server on Linux PCs.
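To make the motor-start problem concrete, here is a minimal sketch that times the gap between two adjacent statements standing in for two back-to-back motor-start commands. The motorA/motorB comments are placeholders, not a real leJOS API; the point is only that the OS is free to preempt the thread between any two calls, so the observed gap is occasionally far larger than the typical sub-microsecond case.

```java
public class SchedulerGap {
    // Returns the largest observed gap (in ns) between two adjacent calls,
    // standing in for two back-to-back motor-start commands.
    static long maxGapNs() {
        long max = 0;
        for (int i = 0; i < 200_000; i++) {
            long t1 = System.nanoTime(); // imagine: motorA.forward()
            long t2 = System.nanoTime(); // imagine: motorB.forward()
            if (t2 - t1 > max) max = t2 - t1;
        }
        return max;
    }

    public static void main(String[] args) {
        // On a loaded system the worst-case gap can jump into the millisecond
        // range: the scheduler preempted us right between the two "starts".
        System.out.println("worst gap between adjacent calls: " + maxGapNs() + " ns");
    }
}
```

With a 10ms scheduling quantum and no atomic multi-motor API, a preemption in that gap means the two motors start a full time slice apart.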
positron96 wrote:(Hmm, after some thoughts, is lejos on NXT a realtime software?)
The NXT firmware switches thread contexts every 1ms. Also, the scheduler of the NXT firmware is much more predictable: a thread with high priority is always preferred by the scheduler over any thread with lower priority, so high-priority threads can be fairly sure to be scheduled as soon as possible. The motor regulation typically ran as a high-priority thread. But as you already guessed, the NXT firmware was not a realtime system either; threads with the same priority are served round-robin.
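For contrast, here is a small sketch of how priorities look from the Java side. On a stock Linux JVM (as on the EV3), Thread.setPriority() is only a hint and may be ignored entirely unless the JVM is started with special options, so unlike on the NXT firmware you cannot rely on the high-priority thread actually getting more CPU time:

```java
public class PriorityDemo {
    // Spins two threads at opposite Java priorities for roughly `ms`
    // milliseconds and returns how many loop iterations each one managed.
    static long[] runFor(long ms) throws InterruptedException {
        final long deadline = System.nanoTime() + ms * 1_000_000L;
        final long[] counts = new long[2];
        Thread high = new Thread(() -> { while (System.nanoTime() < deadline) counts[0]++; });
        Thread low  = new Thread(() -> { while (System.nanoTime() < deadline) counts[1]++; });
        high.setPriority(Thread.MAX_PRIORITY); // merely a hint on mainstream Linux JVMs
        low.setPriority(Thread.MIN_PRIORITY);
        high.start();
        low.start();
        high.join();
        low.join();
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] c = runFor(200);
        // On a strict priority scheduler like the NXT's, the low-priority
        // count would be starved near zero; on Linux both usually run.
        System.out.println("high=" + c[0] + " low=" + c[1]);
    }
}
```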
Oh, and once in a while the garbage collector will halt the whole JVM. This is always the case on the NXT. The Oracle JVM used on the EV3 offers concurrent garbage collectors, but in the worst case even those fall back to halting the whole JVM. The trick to avoid that is to reuse objects, arrays, and similar structures as much as possible in performance-critical code paths.