Let's say I wanted to know the time of execution of some command with an accuracy of 1/10 of a millisecond. To be clear, this command varies quite a lot in how long it takes to execute, and all I care about is when it was first called, NOT how long it takes to run! What I might do in Python is this:
```python
start_time = time.time()
executeMyCommand()
```

in which `start_time` would be my answer... However, would the kernel tick frequency (100 Hz) mean that there is a chance the actual time of execution is a couple of milliseconds later, since other processes might be given priority?
- Is this the case or does it work differently?
- If it is indeed the case, is it possible to resolve this somehow (short of increasing the tick frequency or running a real-time distro)?
I've also looked at `time.process_time()`, but if I understand it correctly it measures CPU time rather than wall-clock time, so it would not be useful in this case.
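For reference, this is roughly how I've been checking what resolution Python's clocks claim to have, and what I'd use to take the timestamp itself (`executeMyCommand` is just a stand-in for my actual command):

```python
import time

# Ask the OS what resolution it reports for each Python clock.
# Note: "resolution" here is what the clock source claims, which is
# usually much finer than the 10 ms scheduler tick of a 100 Hz kernel.
for name in ("time", "monotonic", "perf_counter", "process_time"):
    info = time.get_clock_info(name)
    print(f"{name}: resolution={info.resolution}, monotonic={info.monotonic}")

def executeMyCommand():
    """Stand-in for the real command whose start time I want."""
    pass

# Take the timestamp just before the call, with nanosecond precision
# (Python 3.7+). Whether the *value* is accurate to 0.1 ms is exactly
# what my question is about.
start_time = time.time_ns()
executeMyCommand()
```

This prints sub-microsecond resolutions on my machine, but I don't know whether scheduling delays can still push the real call a few milliseconds past `start_time`.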