On Unix systems, libfaketime is the de facto standard solution to this problem in software. It performs a sort of man-in-the-middle operation between your application and the system's time functions: requests are passed through to the system unchanged, but the replies are modified as requested. It also intercepts stat() calls, so file modification times etc. are adjusted as well.
The settings are made via environment variables, so if you're not already familiar with how those work, you may need to read up on them first.
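For example, a typical invocation on Linux looks roughly like this (the library path varies by distribution, so treat it as a placeholder):

```shell
# Preload the interception library and set a fake time for everything run
# in this shell afterwards. Adjust the .so path to your installation.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/faketime/libfaketime.so.1
export FAKETIME="@2024-01-01 12:00:00"
date   # now reports the faked clock
```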
libfaketime supports relative and absolute offsets, stopped and running clocks, and speeding time up or slowing it down. I don't know about running backwards, though.
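The FAKETIME string covers these modes roughly as follows (formats as I understand them from the docs; double-check before relying on them):

```shell
export FAKETIME="-15d"                  # relative offset: 15 days in the past, clock keeps running
export FAKETIME="@2020-12-24 20:30:00"  # absolute: clock frozen at that instant
export FAKETIME="+0 x0.5"               # no offset, but time advances at half speed
export FAKETIME="+0 x2"                 # ...or at double speed
```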
Note that the current version treats seconds as the finest resolution of time there is, and passes the sub-second part of each reply through to the application unmodified. Thus, if you slow the clock to half speed, a sub-second-aware program making very rapid time calls will experience each whole second twice in a row, rather than once at half speed. If the clock is stopped, this means the application sees an essentially random sub-second time within the selected second.
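To illustrate with made-up numbers: if the scaling is applied only to the whole-second part while the fractional part passes through untouched, a clock slowed to half speed behaves like this (a simulation of the effect as I understand it, not libfaketime's actual code):

```shell
# Simulate half-speed scaling that only affects whole seconds.
awk 'BEGIN {
  rate = 0.5
  for (t = 0; t < 4; t += 0.5) {
    whole = int(t * rate)   # only the whole-second part is scaled
    frac  = t - int(t)      # sub-second part passed through unmodified
    printf "real %.1f -> faked %.1f\n", t, whole + frac
  }
}'
```

The faked clock reads 0.0, 0.5, 0.0, 0.5, 1.0, 1.5, 1.0, 1.5: each second is lived through twice at normal speed instead of once at half speed.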
Unfortunately, GitHub is down today, so the docs are unavailable right now, but that will probably be fixed soon.
Edit:
With GitHub back up I tried it, and found the speed-change code to be general enough that it works well with negative values, or anything else accepted by atof().
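So running backwards does work; presumably something like this (my guess at the exact string, based on the x speed modifier):

```shell
export LD_PRELOAD=/path/to/libfaketime.so.1   # adjust to your installation
export FAKETIME="+0 x-1"   # no initial offset, speed factor -1: the clock ticks backwards
while true; do date; sleep 1; done   # timestamps should now decrease
```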
However, a different issue became apparent: for speed changes, a time origin is needed in addition to the real current time and the desired offset. Each subprocess sets this origin independently, so (when running backwards at nominal speed) once the parent process has retreated by one minute, a newly spawned subprocess will see the current time as two minutes later than the parent does.
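The two-minute figure falls out of the arithmetic: each process effectively computes faked = origin + rate * (real - start), with start set to its own launch time. A quick sanity check of that claim (my model of the behaviour, not libfaketime's code):

```shell
# Parent starts at real time 0; a child is spawned one real minute later.
awk 'BEGIN {
  rate = -1                             # running backwards at nominal speed
  real = 60                             # one real minute after parent start
  parent = 0    + rate * (real - 0)     # parent origin is its own start, 0
  child  = real + rate * (real - real)  # child re-reads the clock at spawn
  printf "parent sees %+d s, child sees %+d s, child ahead by %d s\n", parent, child, child - parent
}'
```

That is, the child's clock sits two minutes ahead of the parent's, as described above.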