Time calculation in the main game loop


Here is some code from the Quake 2 main game loop implementation:

if (!initialized)
{   // let base retain 16 bits of effectively random data
    base = timeGetTime() & 0xffff0000;
    initialized = true;
}
curtime = timeGetTime() - base;

I'm wondering about the line base = timeGetTime() & 0xffff0000. Why are they applying the 0xffff0000 mask to the retrieved time? Why not just use:

if (!initialized)
{   // let base retain 16 bits of effectively random data
    initialTime = timeGetTime();
    initialized = true;
}
curtime = timeGetTime() - initialTime;

What is the role of that mask?

c++
visual-studio
asked on Stack Overflow Aug 24, 2020 by Derek81 • edited Aug 25, 2020 by Derek81

3 Answers


The if (!initialized) check will only pass once. Therefore curtime will become larger with each game loop iteration, which wouldn't be the case with the suggested rewrite, since the upper word may increase after sufficiently many iterations.

Most likely this is done in preparation for an int-to-float conversion, where smaller numbers give higher accuracy (for example when computing the time elapsed between two frames and numerically integrating game state over time, but also when rendering smooth animations).
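As an illustration of the precision point, here is a minimal self-contained sketch (not Quake code; the simulated uptime value is invented). A float carries 24 mantissa bits, so a raw millisecond count above 2^24 silently loses whole milliseconds when converted, while the difference against a masked base stays small enough to remain exact:

#include <cstdint>
#include <cstdio>

int main()
{
    // Pretend timeGetTime() was sampled on a machine that has been up for
    // roughly 20 days (value invented for illustration):
    uint32_t now = 1728000123u;              // milliseconds since boot

    // Raw reading converted to float: integers above 2^24 (~16.7 million)
    // are rounded, so whole milliseconds are lost.
    float rawAsFloat = (float)now;
    printf("raw    %u -> %.1f\n", now, rawAsFloat);

    // Quake-style base: keep only the high 16 bits of the reading.
    uint32_t base    = now & 0xffff0000u;
    uint32_t curtime = now - base;           // small value, low word preserved
    float curAsFloat = (float)curtime;
    printf("masked %u -> %.1f\n", curtime, curAsFloat);
}

On this input the raw value prints as 1728000128.0 (5 ms off), while the masked difference (12411) converts exactly.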

answered on Stack Overflow Aug 24, 2020 by Hi - I love SO

The way it is implemented, base, and hence curtime, take different values depending on whether initialized is true or false: base is assigned only on the first pass, while every later iteration reuses the same value.
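A small sketch of that behaviour, with the snippet wrapped in a helper function; the names and the stand-in timer are illustrative, not taken from the Quake 2 sources. base is computed on the first call only, and every later call subtracts that same base:

#include <cstdint>
#include <cstdio>

// Stand-in for timeGetTime(): a fake millisecond counter for illustration.
static uint32_t fakeTimeGetTime()
{
    static uint32_t t = 0x1234abcdu;   // arbitrary "system uptime"
    return t += 16;                    // pretend ~16 ms pass per call
}

static uint32_t currentTimeMs()
{
    static uint32_t base = 0;
    static bool initialized = false;

    if (!initialized)
    {   // runs only on the first call: keep the high word of the reading
        base = fakeTimeGetTime() & 0xffff0000u;
        initialized = true;
    }
    return fakeTimeGetTime() - base;   // later calls reuse the same base
}

int main()
{
    for (int i = 0; i < 3; ++i)
        printf("curtime = %u\n", currentTimeMs());
}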

answered on Stack Overflow Aug 24, 2020 by Giogre

According to Microsoft Docs, the description of timeGetTime() is:

The timeGetTime function retrieves the system time, in milliseconds. The system time is the time elapsed since Windows was started.

Remarks

The only difference between this function and the timeGetSystemTime function is that timeGetSystemTime uses the MMTIME structure to return the system time. The timeGetTime function has less overhead than timeGetSystemTime.

Note that the value returned by the timeGetTime function is a DWORD value. The return value wraps around to 0 every 2^32 milliseconds, which is about 49.71 days. This can cause problems in code that directly uses the timeGetTime return value in computations, particularly where the value is used to control code execution. You should always use the difference between two timeGetTime return values in computations.

The default precision of the timeGetTime function can be five milliseconds or more, depending on the machine. You can use the timeBeginPeriod and timeEndPeriod functions to increase the precision of timeGetTime. If you do so, the minimum difference between successive values returned by timeGetTime can be as large as the minimum period value set using timeBeginPeriod and timeEndPeriod. Use the QueryPerformanceCounter and QueryPerformanceFrequency functions to measure short time intervals at a high resolution.

In my opinion, this is why the code works with the difference between two timeGetTime() return values rather than with the raw readings; it helps improve the accuracy of the time calculations.
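The wrap-around advice can be demonstrated with a small sketch (stand-in values rather than real timeGetTime() calls): because the subtraction is done in unsigned 32-bit arithmetic, the difference between two readings stays correct even when the counter has wrapped past 0xFFFFFFFF in between.

#include <cstdint>
#include <cstdio>

// Elapsed milliseconds between two DWORD timer readings. The unsigned
// 32-bit subtraction wraps modulo 2^32, so the result is correct even if
// the counter rolled over between the two samples.
static uint32_t elapsedMs(uint32_t earlier, uint32_t later)
{
    return later - earlier;
}

int main()
{
    uint32_t before = 0xFFFFFF00u;   // just before the ~49.7-day wrap
    uint32_t after  = 0x00000100u;   // shortly after the wrap
    printf("%u ms elapsed\n", elapsedMs(before, after));   // prints 512
}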

answered on Stack Overflow Aug 25, 2020 by Barrnet Chou

User contributions licensed under CC BY-SA 3.0