NULLs in q and in k.h


I've found that the values for the short (h) nulls differ between k.h and q:

q)0x00 vs 0W
0x7fffffffffffffff
q)0x00 vs 0N
0x8000000000000000
q)0x00 vs 0Ni
0x80000000
q)0x00 vs 0Wi
0x7fffffff
q)0x00 vs 0Wh
0x7fff
q)0x00 vs 0Nh
0x8000

In q it all looks familiar, but in k.h the definition of nh seems quite strange:

// nulls(n?) and infinities(w?)
#define nh ((I)0xFFFF8000)
#define wh ((I)0x7FFF)
#define ni ((I)0x80000000)
#define wi ((I)0x7FFFFFFF)
#define nj ((J)0x8000000000000000LL)

Why is the value for nh (I)0xFFFF8000? Why didn't they simply use (H)0x8000?

asked on Stack Overflow Oct 15, 2020 by egor7

1 Answer


I suspect the extra bits are used to represent the null inside the interpreter or virtual machine, to differentiate it from the ordinary short value 0x8000. Using the extra bits to store the special, non-integer values leaves the full 16 bits available for representing integers; it avoids having to promote the 0x8000 bit pattern to a 32-bit value, and it makes storing and processing lists of shorts more efficient.
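
One C-level observation makes the promotion point concrete (a minimal sketch; only the nh value comes from k.h, the rest is illustrative): when a 16-bit value holding the null pattern 0x8000 is promoted to int, C sign-extends it, producing exactly the 0xFFFF8000 bit pattern, so nh compares equal to a promoted short null.

#include <stdio.h>

int main(void) {
    short h = (short)0x8000;              // the 16-bit null bit pattern (-32768)
    int   i = h;                          // C promotes short to int by sign extension
    printf("%08X\n", (unsigned)i);        // prints FFFF8000 -- the nh value in k.h
    printf("%d\n", i == (int)0xFFFF8000); // prints 1: the promoted short equals nh
    return 0;
}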

When you use vs to convert to binary, it looks like it forces 16-bit output, masking off the special bits. However, this isn't the internal binary representation of the special value, which you may be able to see using 0b as the first parameter. For example:

q)0b vs 0W

I don't have access to a q prompt to try it for 0Nh, but you can experiment with it.
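
To make the storage point concrete as well, here is a small C sketch (the array v is purely illustrative, not q's actual vector layout; nh is copied from k.h with the I typedef expanded to int): truncated to 16 bits, a null occupies just the 0x8000 pattern like any other short, yet after promotion it still compares equal to the 32-bit nh constant.

#include <stdio.h>

#define nh ((int)0xFFFF8000) // as defined in k.h, with I spelled as int

int main(void) {
    // a hypothetical vector of shorts with a null in the middle:
    // each element, the null included, occupies only two bytes
    short v[3] = {1, (short)nh, 3};
    for (int i = 0; i < 3; i++)
        printf("v[%d] bits: 0x%04hX  null? %d\n",
               i, (unsigned short)v[i], v[i] == nh);
    return 0;
}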

This is all speculative since I don't have any special knowledge of the q implementation, but I have built several interpreters and VMs and this is what makes sense to me.

answered on Stack Overflow Dec 10, 2020 by Josh Segall

User contributions licensed under CC BY-SA 3.0