The return type (int or long). It defaults to time_t, which is normally an alias for int on a 32-bit system and long on a 64-bit system, i.e. 32 or 64 bits respectively.
A signed integer representing the unix time which is equivalent to this SysTime.
import core.time : hours;
import std.datetime.date : DateTime;
import std.datetime.timezone : SimpleTimeZone, UTC;

assert(SysTime(DateTime(1970, 1, 1), UTC()).toUnixTime() == 0);

auto pst = new immutable SimpleTimeZone(hours(-8));
assert(SysTime(DateTime(1970, 1, 1), pst).toUnixTime() == 28800);

auto utc = SysTime(DateTime(2007, 12, 22, 8, 14, 45), UTC());
assert(utc.toUnixTime() == 1_198_311_285);

auto ca = SysTime(DateTime(2007, 12, 22, 8, 14, 45), pst);
assert(ca.toUnixTime() == 1_198_340_085);

static void testScope(scope ref SysTime st) @safe
{
    auto result = st.toUnixTime();
}
Converts this SysTime to unix time (i.e. seconds since midnight, January 1st, 1970 in UTC).
The C standard does not specify the representation of time_t, so it is implementation-defined. On POSIX systems, unix time is equivalent to time_t, but that is not necessarily true on other systems (e.g. it is not true for the Digital Mars C runtime). So, be careful when using unix time with C functions on non-POSIX systems.
By default, the return type is time_t (which is normally an alias for int on 32-bit systems and long on 64-bit systems), but if a different size is required, then either int or long can be passed as a template argument to get the desired size.
If the return type is int, and the result can't fit in an int, then the closest value that can be held in 32 bits will be used (so int.max if it goes over and int.min if it goes under). However, no attempt is made to deal with integer overflow if the return type is long.
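A short sketch of that clamping behavior (hedged: this assumes a D compiler with the standard std.datetime modules; the specific dates are chosen by me for illustration). int.max as a unix time corresponds to 2038-01-19 03:14:07 UTC, so any later instant saturates to int.max when toUnixTime!int is used, while toUnixTime!long keeps the exact value:

```d
import std.datetime.date : DateTime;
import std.datetime.systime : SysTime;
import std.datetime.timezone : UTC;

void main()
{
    // The last instant representable as a 32-bit signed unix time.
    auto limit = SysTime(DateTime(2038, 1, 19, 3, 14, 7), UTC());
    assert(limit.toUnixTime!int() == int.max);

    // One second later no longer fits in 32 bits, so the int
    // instantiation clamps to the closest representable value...
    auto past = SysTime(DateTime(2038, 1, 19, 3, 14, 8), UTC());
    assert(past.toUnixTime!int() == int.max);

    // ...while the long instantiation keeps the exact result.
    assert(past.toUnixTime!long() == long(int.max) + 1);
}
```

Symmetrically, an instant before the earliest 32-bit unix time (1901-12-13 20:45:52 UTC) would clamp to int.min with the int instantiation.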