Windows error 0x00000234, 564

Detailed Error Information

INVALID_LDT_DESCRIPTOR[1]

Message: Indicates that the user supplied an invalid descriptor when trying to set up Ldt descriptors.
Declared in: winerror.h

This appears to be a raw Win32 error. More information may be available in error 0x80070234.

HRESULT analysis[2]

This is probably not the correct interpretation of this error. The Win32 error above is more likely to indicate the actual problem.
Flags
Severity: Success

This code indicates success, rather than an error. This may not be the correct interpretation of this code, or possibly the program is handling errors incorrectly.

Reserved (R): false
Origin: Microsoft
NTSTATUS: false
Reserved (X): false
Facility Code: 0 (0x000)
Facility Name: FACILITY_NULL[2][1]
Facility Description: The default facility code.[2][1]
Error Code: 564 (0x0234)

Possible solutions

1

Efficient way of determining minimum field size required to store variances in user input

c++
bit-manipulation

Though I can't see a reason for all that... why not just compare the input with std::numeric_limits<uint16_t>::max()? If the input is a larger value, then you need to use uint32_t.
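The comparison suggested above can be sketched as follows (the function name `required_bits` is illustrative, not from the original answer):

```cpp
#include <cstdint>
#include <limits>

// Pick the narrowest field width that can hold `value`:
// 16 bits if it fits in uint16_t, otherwise 32 bits.
inline int required_bits(std::uint32_t value)
{
    return value <= std::numeric_limits<std::uint16_t>::max() ? 16 : 32;
}
```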


Answering your edit:

I suppose that for better performance you should use hardware-specific low-level instructions. You could iterate over the 32-bit parts of the 128-bit input value, subsequently adding each one to some variable and checking the difference between the next value and the current sum. If the difference isn't equal to the sum, then you should skip this 128-bit value; otherwise you'll get the necessary result in the end. A sample follows:

#include <stdexcept>

uint32_t get_value( uint32_t v1, uint32_t v2, uint32_t v3, uint32_t v4)
{
  uint32_t temp = v1;
  // temp - vN differs from temp exactly when vN is nonzero.
  if ( temp - v2 != temp ) throw std::runtime_error( "rejected" );
  temp += v2; if ( temp - v3 != temp ) throw std::runtime_error( "rejected" );
  temp += v3; if ( temp - v4 != temp ) throw std::runtime_error( "rejected" );
  temp = v4;
  return temp;
}

In C++ this example may look silly, but I believe that in assembly code it should efficiently process the input stream.

answered on Stack Overflow Nov 10, 2010 by Kirill V. Lyadvinsky • edited Nov 10, 2010 by Kirill V. Lyadvinsky
1

Efficient way of determining minimum field size required to store variances in user input

c++
bit-manipulation

Store the first full 128-bit number you encounter, then push its lower-order 32 bits onto a vector, and set bool reject_all = false. For each remaining number, if its high-order (128-32=96) bits differ from the first number's, set reject_all = true; otherwise push its lower-order bits onto the vector. At the end of the loop, use reject_all to decide whether to use the vector of values.
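A minimal sketch of the steps above, assuming each 128-bit number arrives as four 32-bit words with index 0 holding the lowest-order word (the type alias and function name are illustrative, not from the original answer):

```cpp
#include <array>
#include <cstdint>
#include <vector>

using u128 = std::array<std::uint32_t, 4>;  // [0] = lowest-order word

// Returns the low-order words if all inputs share the same high-order
// 96 bits; returns an empty vector in the reject_all case.
std::vector<std::uint32_t> collect_low_words(const std::vector<u128>& inputs)
{
    std::vector<std::uint32_t> low;
    if (inputs.empty()) return low;

    const u128& first = inputs.front();
    bool reject_all = false;

    for (const u128& n : inputs) {
        // Compare the high-order 96 bits (words 1..3) with the first number's.
        if (n[1] != first[1] || n[2] != first[2] || n[3] != first[3]) {
            reject_all = true;
            break;
        }
        low.push_back(n[0]);
    }
    if (reject_all) low.clear();
    return low;
}
```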

answered on Stack Overflow Nov 10, 2010 by Tony Delroy • edited Nov 10, 2010 by Tony Delroy
0

Efficient way of determining minimum field size required to store variances in user input

c++
bit-manipulation

The most efficient way to store a series of unsigned integers in the range [0, (2^32)-1] is by just using uint32_t. Jumping through hoops to save 2 bytes from user input is not worth your time--the user cannot possibly, in his lifetime, enter enough integers that your code would have to start compressing them. He or she would die of old age long before memory constraints became apparent on any modern system.

answered on Stack Overflow Nov 10, 2010 by Jonathan Grynspan
0

Efficient way of determining minimum field size required to store variances in user input

c++
bit-manipulation

It looks like you have to come up with a cumulative bitmask -- which you can then look at to see whether you have trailing or leading constant bits. An algorithm that operates on each input will be required (making it an O(n) algorithm, where n is the number of values to inspect).

The algorithm would be similar to something like what you've already done:

// OR every input value into a cumulative bitmask.
unsigned long long bitmask = 0uL;
std::size_t count = val.size();
for (std::size_t i = 0; i < count; ++i)
  bitmask |= val[i];

You can then check to see how many bits/bytes leading/trailing can be made constant, and whether you're going to use the full 32 bits. If you have access to SSE instructions, you can vectorize this using OpenMP.

There's also a possible optimization by short-circuiting to see if the distance between the first 1 bit and the last 1 bit is already greater than 32, in which case you can stop.

For this algorithm to scale better, you're going to have to do it in parallel. Your friend would be vector processing (maybe using CUDA for Nvidia GPUs, or OpenCL if you're on the Mac or on platforms that already support OpenCL, or just OpenMP annotations).

answered on Stack Overflow Nov 10, 2010 by Dean Michael
0

Efficient way of determining minimum field size required to store variances in user input

c++
bit-manipulation

Use

uint32_t ORVal = 0;
uint32_t ANDVal = 0xFFFFFFFF;

ORVal  |= input1;
ANDVal &= input1;
ORVal  |= input2;
ANDVal &= input2;
ORVal  |= input3;
ANDVal &= input3; // etc.

// At end of input...
mask = ORVal ^ ANDVal; 
// bit positions set to 0 were constant, bit positions set to 1 changed

A bit position in ORVal will be 1 if at least one input had 1 in that position and 0 if ALL inputs had 0 in that position. A bit position in ANDVal will be 0 if at least one input had 0 in that bit position and 1 if ALL inputs had 1 in that position.

If a bit position in inputs was always 1, then ORVal and ANDVal will both be set to 1. If a bit position in inputs was always 0, then ORVal and ANDVal will both be set to 0. If there was a mix of 0 and 1 in a bit position then ORVal will be set to 1 and ANDVal set to 0, hence the XOR at the end gives the mask for bit positions that changed.
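The fragment above can be folded into a self-contained function (the name `changed_mask` is an assumption for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// Fold OR and AND over the inputs, then XOR them: in the result,
// 1 = the bit changed across inputs, 0 = the bit was constant.
std::uint32_t changed_mask(const std::uint32_t* inputs, std::size_t n)
{
    std::uint32_t or_val  = 0;
    std::uint32_t and_val = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < n; ++i) {
        or_val  |= inputs[i];
        and_val &= inputs[i];
    }
    return or_val ^ and_val;
}
```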

answered on Stack Overflow Nov 10, 2010 by JohnPS • edited Nov 10, 2010 by JohnPS


Sources

  1. winerror.h from Windows SDK 10.0.14393.0
  2. https://msdn.microsoft.com/en-us/library/cc231198.aspx

User contributions licensed under CC BY-SA 3.0