INT64_C and UINT64_C should be defined in more cross-platform way - by DanielB_Logi

Status:

  Fixed
  This item has been fixed in the current or upcoming version of this product.
  A more detailed explanation for the resolution of this particular item may have been provided in the comments section.

ID: 685726
Status: Closed
Type: Bug
Repros: 0
Opened: 8/25/2011 9:15:42 PM
Access Restriction: Public


In converting some cross-platform C++ projects from VS 2008 to Visual Studio 2010, we've tried out some of the new standard library headers that VS 2010 now ships, instead of defining our own for Windows. One of those is stdint.h.

However, stdint.h defines INT64_C and UINT64_C in a way that isn't fully compatible with Linux and Mac.  The VS 2010 version of INT64_C and UINT64_C looks like:

#define INT64_C(x)		((x) + (INT64_MAX - INT64_MAX))
#define UINT64_C(x)		((x) + (UINT64_MAX - UINT64_MAX))

However, on Linux, for example, it's defined this way:
/* Signed.  */
# define INT8_C(c)      c
# define INT16_C(c)     c
# define INT32_C(c)     c
# if __WORDSIZE == 64
#  define INT64_C(c)    c ## L
# else
#  define INT64_C(c)    c ## LL
# endif

/* Unsigned.  */
# define UINT8_C(c)     c
# define UINT16_C(c)    c
# define UINT32_C(c)    c ## U
# if __WORDSIZE == 64
#  define UINT64_C(c)   c ## UL
# else
#  define UINT64_C(c)   c ## ULL
# endif

and on Mac, this way:

#define INT8_C(v)      (v)
#define INT16_C(v)     (v)
#define INT32_C(v)     (v)
#define INT64_C(v)    (v ## LL)

#define UINT8_C(v)     (v ## U)
#define UINT16_C(v)    (v ## U)
#define UINT32_C(v)    (v ## U)
#define UINT64_C(v)   (v ## ULL)

The problem with the VS 2010 definition is that code like:

#define ONEDAY_REFERENCETIME INT64_C(24 * 60 * 60 * 10000000)
class Test {
	__int64 m_value;
public:
	Test() : m_value(ONEDAY_REFERENCETIME) { }
};

produces a warning:
warning C4307: '*' : integral constant overflow

Instead of the current definition, VS 2010 and later should use a more compatible definition:

/* Signed.  */
#define INT8_C(c)      c
#define INT16_C(c)     c
#define INT32_C(c)     c
#define INT64_C(c)    c ## i64

/* Unsigned.  */
#define UINT8_C(c)     c ## U
#define UINT16_C(c)    c ## U
#define UINT32_C(c)    c ## U
#define UINT64_C(c)   c ## ui64
Posted by Microsoft on 11/4/2011 at 9:38 PM

Thanks for reporting this issue. INT64_C(24 * 60 * 60 * 10000000) is actually prohibited by the C Standard. C99 7.18.4/2 requires that "The argument in any instance of these macros shall be an unsuffixed integer constant (as defined in 6.4.4.1) with a value that does not exceed the limits for the corresponding type." 24 * 60 * 60 * 10000000 is an expression, not a single constant.

However, I've changed our implementation to

#define INT8_C(x) (x)
#define INT16_C(x) (x)
#define INT32_C(x) (x)
#define INT64_C(x) (x ## LL)

#define UINT8_C(x) (x)
#define UINT16_C(x) (x)
#define UINT32_C(x) (x ## U)
#define UINT64_C(x) (x ## ULL)

because of a potential issue with enormous constants. According to C++11 2.14.2 [lex.icon]/2, unsuffixed integer constants in decimal have the first type in the list "int, long, long long" that they can be represented in. /3 says "A program is ill-formed if one of its translation units contains an integer literal that cannot be represented by any of the allowed types." This makes 18446744073709551615 ill-formed, so ((18446744073709551615) + (UINT64_MAX - UINT64_MAX)) is also ill-formed. As far as I can tell, this means that we must use token pasting.

(I'm actually not sure whether 2.14.2/3 is applied before macro expansion, so perhaps the original implementation was technically correct. But I believe my new implementation is also correct, and C99 seems to be suggesting a token-pasting implementation with its requirement for unsuffixed integer constants.)

Note that C99 7.18.4/3 requires the output of these macros to "have the same type as would an expression of the corresponding type converted according to the integer promotions". The integer promotions widen tiny types (like short and unsigned short) to int, which is why my UINT8_C and UINT16_C implementations don't paste U. (It is possible that I should be saying (x + 0) in order to actually activate the integer promotions, given that C++ can detect types with overloading and templates - but I can't spend forever analyzing these macros. :->)

If you have any further questions, feel free to E-mail me at .

Stephan T. Lavavej
Visual C++ Libraries Developer
Posted by EricLeong [Feedback Moderator] on 8/26/2011 at 2:31 AM
Thank you for submitting feedback on Visual Studio 2010 and .NET Framework. Your issue has been routed to the appropriate VS development team for review. We will contact you if we require any additional information.
Posted by UnitUniverse on 8/26/2011 at 12:36 AM
The standard integer types are not defined to be exactly 16/32/64 bits wide, as many assume; it is legal per the standard for a C++ compiler to make a given integer type any width at or above the required minimum. For 'long long' that minimum is 64 bits, and "at least" means a compiler that makes 'long long' 128 bits or even wider is also conforming.
In my opinion, code written for those platforms is bad because it makes too many assumptions that a given integer type has a specific width on every compiler (e.g. assuming int is always 32 bits, or that a cast between a pointer and a 'long' is always safe). That is why non-standard fixed-width type definitions such as INT16/INT32/size_t/intptr_t, etc. appeared on the MS development platforms.
Microsoft's approach keeps future versions of the compiler portable. After all, an INT64 should never become a 128-bit integer because of a wrong definition like "#define INT64 long long".
Posted by MS-Moderator01 on 8/25/2011 at 9:41 PM
Thank you for your feedback, we are currently reviewing the issue you have submitted. If this issue is urgent, please contact support directly(