
Tuesday, September 6, 2016

When is it signaled?

Signals are a mechanism for notifying a process of a simple event.  Most programmers and programs can ignore them, as the default behaviors are reasonable and handling them explicitly would add considerable complexity to the program.  But when teaching future computer scientists, we want programmers to know about these mechanisms and thereby to properly understand how the system functions.

In working with signals, developing programmers are often exposed to their first dose of concurrency: the idea that execution can happen simultaneously and in arbitrary orders, except where the program takes explicit action.  With signals, a program can do several things (a rough sketch of each follows the list):

  • Provide and install a handler for one or more signals
  • Block the receipt of one or more signals
  • Send a signal to one or more processes
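
As a rough illustration of these three operations, here is a minimal sketch that is not part of the original post; the choice of SIGUSR1/SIGUSR2 and the self-signaling via kill(getpid(), ...) are arbitrary.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_usr1 = 0;

// 1. A handler: record that SIGUSR1 arrived.
static void on_usr1(int signum)
{
    (void)signum;
    got_usr1 = 1;
}

int main(void)
{
    // Install the handler for SIGUSR1.
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_usr1;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    // 2. Block SIGUSR2; it would stay pending until unblocked.
    sigset_t block;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR2);
    sigprocmask(SIG_BLOCK, &block, NULL);

    // 3. Send a signal: here the process signals itself.
    kill(getpid(), SIGUSR1);

    while (!got_usr1) {}  // wait until the handler has run
    printf("received SIGUSR1\n");
    return 0;
}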

We are going to consider the problem of sending and receiving a signal.  After a signal is sent, when does the other process receive it?  Students often assume that once the sending function (i.e., kill) has returned, the signal has been sent *and received*.  However, I have found no text that explicitly guarantees this condition, so I prepared a simple program (source at the end of this post) to test it.

As signals are communicated through the operating system, we want a different mechanism for measuring simultaneity, in this case shared memory.  The experiment program creates and sets up a small region of shared memory between two processes.  Next, it waits until both processes are running in a known state (i.e., a barrier).  Then one (the parent) signals the other (the child), while measuring how long the signal takes to send as well as to be received.  Finally, this experiment is repeated one million times.

On my current Ubuntu box with Skylake processors, the measurements show that 80% of the time the child keeps running after kill returns, lasting an average of 40 cycles and a maximum of almost 1200 cycles.  Assuming that the skew between the cores' clocks is smaller than these deltas, this effectively means that the child can continue to run even after the sending function has returned.

Source code follows (compiled with gcc version 4.8.3, using gcc -O3 -lrt):
#include <sys/mman.h>
#include <sys/types.h>

#include <sys/wait.h>
#include <sys/stat.h> /* For mode constants */
#include <fcntl.h> /* For O_* constants */
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <stdint.h>

#include "rdtsc.h"

static const int ITER = 1000 * 1000;

int main(int argc, char** argv)
{
    int shm_fd = shm_open("squig", O_CREAT | O_RDWR, (S_IREAD | S_IWRITE));
    int count = 0, i;
    uint64_t zero = 0, min = ~0x0, max = 0, sum = 0;
    uint64_t minN = ~0x0, maxN = 0, sumN = 0;
    write(shm_fd, &zero, 8);  // Give the shared file "space"
    void* msh = mmap(NULL, 4 * 1024, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
    volatile uint64_t* tsc = msh;
    
    for (i = 0; i < ITER; i++)
    {
        *tsc = 0;
        pid_t chd = fork();
        
        if (chd == 0)
        {
            // Give the compiler something to think about
            while(zero < ~0x0) { *tsc = rdtsc(); zero++;}
        }
        else
        {
            // Wait for the child
            while (*tsc == 0) {}
            
            uint64_t st = *tsc; // When is it for the child?
            kill(chd, SIGINT); // Send SIGINT to child
            uint64_t k = rdtsc(); // Returned from kill
            wait(NULL); // Reap
            uint64_t delta = 0;
            uint64_t en = *tsc; // When did the child end?
            
            // k > en implies that kill returned after the child terminated
            if (k > en)
            {
                count ++;
                delta = k - en;
                if (delta < minN) minN = delta;
                if (delta > maxN) maxN = delta;
                sumN += delta;
            }
            else
            {
                delta = en - k;
                if (delta < min) min = delta;
                if (delta > max) max = delta;
                sum += delta;
            }
        }
    }
    
    printf("Min: %lx, Max: %lx, Avg: %lx\n", min, max, (sum / (ITER - count)));
    printf("Min: %lx, Max: %lx, Avg: %lx\n", minN, maxN, (sumN / (count)));
    printf("Percent Parent After: %lf\n", (count / (double)ITER));
    
    return 0;
}

Update: Results also hold when sending SIGKILL.


Friday, June 21, 2013

What is Time: Keeping Time in Software

In a prior post, Measuring Performance, I discussed some techniques for how a developer might measure the performance of his or her application.  However, sometimes what you want is to measure an interval of time.  We can generally agree to do this by taking two measurements, the time before and the time after, and then finding the difference, which is the elapsed interval.  Great!  But how exactly are we going to measure time?

A common theme to the questions I answer on stackoverflow is proper measurement of time (see for example, Analysis of CPU caches, Function Pointers versus Inline Functions, or Windows system time with millisecond resolution).  A programmer starts with the tools most readily available, usually a call to gettimeofday or GetTickCount (for example).  These functions provide time in (roughly) millisecond resolution, like having a stopwatch.  For its own timekeeping purposes, an OS sets up hardware to send an interrupt every ~16ms (i.e., 64Hz via the PIT).  This interrupt primarily denotes a scheduling quantum, but for little additional overhead, the OS updates its clock.  Some applications need finer-grained scheduling and increase the frequency of the interrupt (i.e., decrease the quantum), which increases overhead and power usage.  Using these calls, the programmer can measure events that take at least 1 centisecond with reasonable accuracy.
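
As a quick sketch of this stopwatch-style measurement (not from the original post), the following times a roughly 50ms sleep with gettimeofday; the sleep is just a stand-in for work, and intervals much shorter than the tick granularity above would not be measured meaningfully this way.

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval before, after;

    gettimeofday(&before, NULL);   // time before
    usleep(50 * 1000);             // the "work": sleep for roughly 50ms
    gettimeofday(&after, NULL);    // time after

    // The difference of the two measurements is the elapsed interval.
    long elapsed_us = (after.tv_sec - before.tv_sec) * 1000000L
                    + (after.tv_usec - before.tv_usec);
    printf("elapsed: %ld us (~%ld ms)\n", elapsed_us, elapsed_us / 1000);
    return 0;
}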

But, you object, I want better accuracy!  Let's turn to the HPET then: a hardware device that internally operates at 10+ MHz (i.e., ticks of 100ns or less).  Thanks to this device, the system has a microsecond time source.  For this resolution, a programmer may use clock_gettime or QueryPerformanceCounter.  Both also come with supplemental routines that report the resolution of the time returned.  For example:

#include <stdio.h>
#include <time.h>

int main(int argc, char** argv)
{
    struct timespec n, o;
    clockid_t t = CLOCK_MONOTONIC;

    // Report the clock's advertised resolution.
    clock_getres(t, &n);
    printf("%ld - %ld\n", (long)n.tv_sec, n.tv_nsec);

    // Take two back-to-back readings of the same clock.
    clock_gettime(t, &n);
    clock_gettime(t, &o);
    printf("%ld - %ld\n", (long)n.tv_sec, n.tv_nsec);
    printf("%ld - %ld\n", (long)o.tv_sec, o.tv_nsec);

    return 0;
}


% ./t_test
0 - 999848
13390932 - 798720095
13390932 - 798720095
This time source has a reported resolution of approximately 1ms; however, the printed "n, o" pair sometimes shows a one-microsecond difference, which suggests that the actual resolution is finer than the value returned by clock_getres.  Thus, on my test machine, I could measure time in microseconds.

But, you object, I want better accuracy!  So let's move on to the timestamp counter.  This counter is per-processor and runs at the processor's frequency (1+ GHz, or <1ns per tick).  clock_gettime and QueryPerformanceCounter may use this time source.  So we can query it for a nanosecond-accurate time measurement.  Done, or are we?

But, I object, it takes time to do this measurement; perhaps a very small amount of time, but time nonetheless.  If we switch the code above to use CLOCK_PROCESS_CPUTIME_ID, the measurements show a 400 - 500ns difference between n and o, even though the reported "resolution" is 1ns.  Did the function lie about its resolution?  No.  Imagine calling NIST or CERN to get the time from their clock.  Their clock is accurate to better than 1ns, but it takes time to call them.  In the code example, each call is taking about 400ns to make the function call(s), query the timestamp and then convert into the appropriate format.  Note that this overhead was also present in the earlier experiments, which is why the earlier output occasionally differed by 1 microsecond, as the 400ns of overhead caused the measurements to fall in different microsecond ticks.

But, you object, I really need to know how many nanoseconds this something takes.  There are two approaches to remedying this: first, we could write some code that performs the measurement repeatedly to determine the overhead and then subtract it from the time difference; or second, we could find a different time source.
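
A rough sketch of the first approach (not from the original post): estimate the per-call overhead by taking many back-to-back readings of the clock, then subtract that estimate from later interval measurements.  The iteration count is arbitrary.

#include <stdio.h>
#include <time.h>

// Estimate the average cost, in nanoseconds, of a single clock_gettime call.
static long calibrate_overhead_ns(clockid_t clk, int iters)
{
    struct timespec start, end, t;
    int i;

    clock_gettime(clk, &start);
    for (i = 0; i < iters; i++)
        clock_gettime(clk, &t);        // back-to-back calls to be averaged
    clock_gettime(clk, &end);

    long total = (end.tv_sec - start.tv_sec) * 1000000000L
               + (end.tv_nsec - start.tv_nsec);
    return total / iters;
}

int main(void)
{
    clockid_t clk = CLOCK_PROCESS_CPUTIME_ID;
    long overhead = calibrate_overhead_ns(clk, 100000);

    struct timespec a, b;
    clock_gettime(clk, &a);
    // ... code being measured goes here ...
    clock_gettime(clk, &b);

    long raw = (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
    printf("overhead ~%ld ns, corrected interval ~%ld ns\n",
           overhead, raw > overhead ? raw - overhead : 0);
    return 0;
}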

These function calls are ultimately querying the timestamp counter, and architects have exposed that counter to programmers for years, so why make the call at all?  On x86, the instruction is RDTSC or RDTSCP, which provides the 64-bit timestamp value.  A single instruction carries about as little overhead as we're going to get (although we could also consider register pressure, cycles required, etc.).
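
This is presumably what the rdtsc.h header in the signal-timing program above wraps.  A minimal sketch of such a wrapper, assuming GCC or Clang inline assembly on x86-64 (the header's actual contents are not shown in the post; compiler intrinsics such as __rdtsc() are an alternative):

#include <stdint.h>

// Read the 64-bit timestamp counter; RDTSC returns it split across EDX:EAX.
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

// RDTSCP also waits for all previous instructions to execute and reports
// a core identifier via ECX (stored into *aux).
static inline uint64_t rdtscp(uint32_t* aux)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtscp" : "=a"(lo), "=d"(hi), "=c"(*aux));
    return ((uint64_t)hi << 32) | lo;
}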

Hah! you exclaim, now I can know which instruction sequence is fastest, because I can measure CPU cycles.  But at this timescale, your measurement has left the "classical" architecture model of executing one instruction at a time and drifted into the reality of many tens of instructions executing simultaneously.  When you toss in your RDTSC, there is no guarantee where in this mix of instructions it will execute and complete (with RDTSCP, you can at least guarantee that all previous instructions have completed).  So even if you are measuring a "small instruction sequence" like those in some of the cited stackoverflow questions, it is recommended that you perform thousands of operations and measure the time of the aggregate.  Of course, architectural behaviors like branch prediction, cache misses, prefetchers, etc. will change over the course of such an experiment.
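
As a sketch of that aggregate measurement (not from the original post), using the same rdtsc() wrapper; the loop body and iteration count are placeholders:

#include <stdint.h>
#include <stdio.h>

// Same rdtsc() wrapper as sketched above, repeated so this compiles alone.
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    const int N = 100000;           // measure thousands of operations, not one
    volatile uint64_t sink = 0;     // keeps the loop from being optimized away
    int i;

    uint64_t start = rdtsc();
    for (i = 0; i < N; i++)
        sink += (uint64_t)i * 2654435761u;   // placeholder "small sequence"
    uint64_t end = rdtsc();

    printf("~%llu cycles per iteration, amortized over %d iterations\n",
           (unsigned long long)((end - start) / N), N);
    return 0;
}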

As if this weren't bad enough, the timestamp counter (including calls using CLOCK_PROCESS_CPUTIME_ID) is specific to each processor.  The counter is not global system time, so taking measurements on different processors (say, from different threads or a thread that migrated) will not necessarily give meaningful results.  Many modern OSes try to set the counters so that measurements taken on different processors are within a small margin of each other; in practice, I have observed deltas ranging from the low hundreds to tens of millions of cycles, so be warned.
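
The original post does not prescribe a fix for this, but one common way to avoid comparing two different processors' counters is to pin the measuring thread to a single core.  A Linux-specific sketch using sched_setaffinity:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    // Restrict the calling thread to CPU 0, so that every timestamp
    // reading comes from the same core's counter.
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);

    if (sched_setaffinity(0, sizeof(set), &set) != 0)
    {
        perror("sched_setaffinity");
        return 1;
    }

    // ... take timestamp-counter measurements here; the thread will not
    // migrate to another processor's counter ...
    printf("pinned to CPU 0\n");
    return 0;
}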

Even worse, older processors really did count CPU cycles as the timestamp, so if the processor slowed down, then a "cycle" meant more nanoseconds.  Modern chips have an invariant TSC; per the Intel architecture manual:

The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource.
As a bonus, all of these clocks can be affected by thermal drift, as their frequency is related to their temperature.  Others have investigated this behavior in greater detail, and while it can be accounted for, in practice I have happily ignored it (although I should note in future writing that I have not controlled for this possible measurement bias).

In conclusion, I use RDTSC for most of my measurements and try to control for the clock difference that exists between processors.  In my current PhD research, I am still working to control for and properly account for this difference so that I can measure the behavior I am interested in.  So be warned, be cautious, and be as precise as you need to be.