What is Unix Time?
Unix time is a system for describing a point in time as the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970, not counting leap seconds. The resulting values, known as Unix timestamps, are used throughout computing.
Unix Time Definition
Unix time, also known as POSIX time or epoch time, is the number of seconds since the Unix epoch. The epoch is defined as 00:00:00 UTC on January 1, 1970, a reference point also called the "Unix epoch" or "POSIX epoch".
This system provides a simple, universal way to represent time that works consistently across different time zones, operating systems, and programming languages. You can see this in action with our live timestamp clock.
The Unix Epoch
January 1, 1970, 00:00:00 UTC was chosen as the Unix epoch for several practical reasons:
- Recent enough to be relevant to early computer systems.
- Far enough in the past to avoid negative timestamps for recent dates.
- Aligned with the beginning of a decade for easy mental calculation.
- Close to when the Unix operating system development began at Bell Labs.
How Unix Time Works
Unix time is calculated by counting the seconds since the epoch:
- Epoch (0): January 1, 1970, 00:00:00 UTC.
- Positive numbers: Dates after January 1, 1970.
- Negative numbers: Dates before January 1, 1970.
Examples
- 0 = January 1, 1970, 00:00:00 UTC.
- 86400 = January 2, 1970, 00:00:00 UTC (24 hours later).
- 1640995200 = January 1, 2022, 00:00:00 UTC.
- -86400 = December 31, 1969, 00:00:00 UTC.
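As an illustration, here is a minimal Python sketch (standard library only) that converts the example values above into UTC dates:

```python
from datetime import datetime, timezone

# The example Unix timestamps listed above
examples = [0, 86400, 1640995200, -86400]

for ts in examples:
    # Interpret each value as seconds since the epoch, in UTC
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(f"{ts:>12} -> {dt.isoformat()}")
```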
Advantages of Unix Time
Simplicity
Unix time represents any moment as a single integer, making it extremely easy to store, transmit, and process.
Time Zone Independence
All Unix timestamps are in UTC, eliminating confusion about time zones. Local time is calculated by applying timezone offsets when displaying to users.
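As a minimal sketch, assuming Python 3.9+ with the standard zoneinfo module (and IANA time zone data available), the same stored timestamp can be rendered in different zones purely at display time:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ts = 1640995200  # a single, zone-independent instant

# The stored value never changes; only the presentation does
utc_time = datetime.fromtimestamp(ts, tz=timezone.utc)
new_york = utc_time.astimezone(ZoneInfo("America/New_York"))
tokyo = utc_time.astimezone(ZoneInfo("Asia/Tokyo"))

print(utc_time.isoformat())  # 2022-01-01T00:00:00+00:00
print(new_york.isoformat())  # 2021-12-31T19:00:00-05:00
print(tokyo.isoformat())     # 2022-01-01T09:00:00+09:00
```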
Easy Arithmetic
Calculating time differences is simple subtraction. Adding or subtracting time periods involves basic addition and subtraction.
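For example, a short sketch of this arithmetic in plain Python:

```python
SECONDS_PER_DAY = 86400

start = 1640995200  # January 1, 2022, 00:00:00 UTC
end = 1643673600    # February 1, 2022, 00:00:00 UTC

elapsed = end - start                # difference is plain subtraction
print(elapsed // SECONDS_PER_DAY)    # 31 days

one_week_later = start + 7 * SECONDS_PER_DAY
print(one_week_later)                # 1641600000 (January 8, 2022)
```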
Sorting and Comparison
Chronological order matches numerical order, making database queries and sorting operations straightforward.
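A brief sketch of that property:

```python
# Ascending numeric order is also chronological order
events = [1640995200, 0, 1643673600, 86400]
events.sort()
print(events)  # [0, 86400, 1640995200, 1643673600]
```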
Efficiency
A single 32-bit or 64-bit integer can represent any timestamp, using minimal storage space and processing power.
Unix Time Precision
Seconds
Traditional Unix time counts whole seconds. This is sufficient for most applications like file timestamps, user registration dates, and log entries.
Milliseconds
Many modern systems use milliseconds for greater precision. JavaScript, for example, uses milliseconds by default. Use our converter to work with both formats:
- Unix seconds: 1640995200.
- Unix milliseconds: 1640995200000.
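A minimal sketch of moving between the two formats in Python (JavaScript's Date.now() is the usual millisecond example, but the conversion is the same factor of 1000):

```python
import time

now = time.time()               # float seconds since the epoch
seconds = int(now)              # Unix seconds, e.g. 1640995200
milliseconds = int(now * 1000)  # Unix milliseconds, e.g. 1640995200000

# Converting between the two is just a factor of 1000
print(1640995200 * 1000)        # 1640995200000
print(1640995200000 // 1000)    # 1640995200
```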
Microseconds and Nanoseconds
High-precision applications may use microseconds (millionths of a second) or nanoseconds (billionths of a second) for extremely accurate timing. Learn more about different time units in computing. Our timestamp generator can create test data for various precisions.
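As a sketch, Python's standard library exposes a nanosecond counter via time.time_ns(); the actual resolution your operating system delivers may be coarser:

```python
import time

ns = time.time_ns()        # integer nanoseconds since the epoch
us = ns // 1_000           # microseconds
ms = ns // 1_000_000       # milliseconds
s = ns // 1_000_000_000    # whole seconds

print(s, ms, us, ns)
```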
The Year 2038 Problem
32-bit signed integers can represent Unix timestamps up to 2,147,483,647 seconds after the epoch, which corresponds to January 19, 2038, at 03:14:07 UTC. One second later, the counter overflows and wraps to a large negative value, which affected systems interpret as a date in December 1901.
The Solution
Modern systems use 64-bit integers for timestamps, which can represent dates far into the future (approximately 292 billion years from the epoch).
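A short sketch of where the 32-bit boundary falls, using Python's arbitrary-precision integers to compute the dates on either side of the wrap:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # 2,147,483,647

# The last instant a signed 32-bit counter can represent
print(EPOCH + timedelta(seconds=INT32_MAX))  # 2038-01-19 03:14:07+00:00

# One second later the counter wraps to -2**31, which affected
# systems interpret as a date late in 1901
print(EPOCH + timedelta(seconds=-2**31))     # 1901-12-13 20:45:52+00:00
```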
Leap Seconds and Unix Time
Unix timestamps deliberately ignore leap seconds to maintain simplicity. This means:
- Unix time assumes each day has exactly 86,400 seconds.
- When a leap second is inserted, Unix clocks typically repeat a second (or "smear" it) so the count stays aligned with the 86,400-second day.
- For most applications, this difference is negligible.
- High-precision systems may need special handling for leap seconds.
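A small sketch of the 86,400-second assumption: consecutive UTC midnights are always exactly one Unix day apart, even across 2016-12-31, a day that actually contained a leap second:

```python
from datetime import datetime, timezone

def midnight_utc(year: int, month: int, day: int) -> int:
    """Unix timestamp of 00:00:00 UTC on the given date."""
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())

# A leap second was inserted at the end of 2016-12-31, yet Unix time
# still counts that day as exactly 86,400 seconds.
print(midnight_utc(2017, 1, 1) - midnight_utc(2016, 12, 31))  # 86400
```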
Unix Time vs Other Time Systems
ISO 8601
ISO 8601 strings (e.g., "2022-01-01T00:00:00Z") are human-readable but larger and slower to process than Unix timestamps.
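A minimal sketch of converting between the two in Python (note that datetime.fromisoformat only accepts a trailing "Z" from Python 3.11 onward; "+00:00" works everywhere):

```python
from datetime import datetime, timezone

# ISO 8601 string -> Unix timestamp
dt = datetime.fromisoformat("2022-01-01T00:00:00+00:00")
print(int(dt.timestamp()))  # 1640995200

# Unix timestamp -> ISO 8601 string
print(datetime.fromtimestamp(1640995200, tz=timezone.utc).isoformat())
# 2022-01-01T00:00:00+00:00
```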
Julian Day Numbers
Used in astronomy and some databases, but less common in general computing.
Windows FILETIME
Windows systems use 100-nanosecond intervals since January 1, 1601, but Unix time is more widely supported.
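As a rough sketch, converting between the two is a fixed scale and offset; the 11,644,473,600-second gap between the 1601 and 1970 epochs used below is the commonly cited constant, but treat it as an assumption and check your platform's documentation:

```python
# Seconds between 1601-01-01 and 1970-01-01 (the two epochs)
EPOCH_DIFF_SECONDS = 11_644_473_600
HUNDRED_NS_PER_SECOND = 10_000_000  # FILETIME ticks are 100 ns

def filetime_to_unix(filetime: int) -> int:
    """Convert a Windows FILETIME value (100-ns ticks since 1601) to Unix seconds."""
    return filetime // HUNDRED_NS_PER_SECOND - EPOCH_DIFF_SECONDS

def unix_to_filetime(unix_seconds: int) -> int:
    """Convert Unix seconds to a Windows FILETIME value."""
    return (unix_seconds + EPOCH_DIFF_SECONDS) * HUNDRED_NS_PER_SECOND

print(unix_to_filetime(0))                   # 116444736000000000
print(filetime_to_unix(116444736000000000))  # 0
```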