If you have ever looked at an API response or a server log and seen a number like 1708646400, you have seen a Unix timestamp. At first glance it looks like meaningless data, but it is one of the most practical ways to represent a point in time in software — and understanding how it works takes about five minutes.
What Is a Unix Timestamp?
A Unix timestamp is a single integer that represents the number of seconds elapsed since the Unix epoch: January 1, 1970, at 00:00:00 Coordinated Universal Time (UTC). This starting point is sometimes called "epoch time" or simply "the epoch."
The number 0 means exactly midnight on January 1, 1970. The number 1708646400 means that many seconds have passed since then — which works out to February 23, 2024. Every second that passes, the timestamp increments by 1.
The format originated in Unix operating systems in the early 1970s and was later codified in the POSIX standard. Decades on, it remains the dominant way that backend systems, APIs, and databases store and exchange time values.
Why Developers Use Timestamps Instead of Formatted Dates
Formatted date strings like "February 23, 2024 3:00 PM" seem more readable, but they introduce a cluster of problems that timestamps avoid entirely.
Timezone ambiguity. "3:00 PM on February 23" means nothing without a timezone. A Unix timestamp is always relative to UTC, so there is no ambiguity regardless of where the server or the user is located.
Simple arithmetic. Calculating the difference between two timestamps is a subtraction: timestamp_b - timestamp_a gives you the elapsed time in seconds. With formatted strings, you need a parsing library, timezone conversion logic, and careful handling of daylight saving transitions.
Language agnostic. An integer is an integer in every programming language. A timestamp produced by a Python script can be consumed directly by a JavaScript frontend, a SQL database, or a Go microservice without any parsing or format negotiation.
Compact storage. A 10-digit integer takes far less space to store and transmit than a formatted date string, which matters at scale.
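The arithmetic point above is easy to see in a few lines of Python. The timestamps here are illustrative values, not from any real system:

```python
# Two Unix timestamps (seconds since the epoch, UTC)
start = 1708646400   # 2024-02-23 00:00:00 UTC
end = 1708650000     # one hour later

# Elapsed time is plain subtraction -- no parsing, no timezone logic,
# no daylight saving edge cases
elapsed_seconds = end - start
print(elapsed_seconds)  # 3600
```

The same subtraction works identically in JavaScript, SQL, or Go, which is exactly the language-agnostic property described above.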
Converting Timestamps in JavaScript, Python, and SQL
Here is how to convert between Unix timestamps and human-readable dates in three common languages.
JavaScript
JavaScript works in milliseconds by default, so you multiply by 1000 when converting from a seconds-based Unix timestamp.
// Timestamp to readable date
const ts = 1708646400;
const date = new Date(ts * 1000);
console.log(date.toISOString()); // "2024-02-23T00:00:00.000Z"
// Current time as a Unix timestamp (seconds)
const now = Math.floor(Date.now() / 1000);
Python
Python's datetime module handles this cleanly: pass tz=timezone.utc to fromtimestamp to get a timezone-aware UTC datetime. (The older utcfromtimestamp is deprecated as of Python 3.12 because it returns a naive datetime.)
from datetime import datetime, timezone
ts = 1708646400
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat()) # 2024-02-23T00:00:00+00:00
# Current Unix timestamp
import time
now = int(time.time())
SQL
Most SQL databases include built-in functions for timestamp conversion.
-- MySQL / MariaDB (result is in the session time zone; shown here for UTC)
SELECT FROM_UNIXTIME(1708646400); -- 2024-02-23 00:00:00
-- PostgreSQL
SELECT TO_TIMESTAMP(1708646400); -- 2024-02-23 00:00:00+00
-- SQLite
SELECT DATETIME(1708646400, 'unixepoch'); -- 2024-02-23 00:00:00
Quick Reference Conversion Table
| Unix Timestamp | UTC Date and Time |
|---|---|
| 0 | 1970-01-01 00:00:00 UTC |
| 1000000000 | 2001-09-09 01:46:40 UTC |
| 1500000000 | 2017-07-14 02:40:00 UTC |
| 1708646400 | 2024-02-23 00:00:00 UTC |
| 2000000000 | 2033-05-18 03:33:20 UTC |
A useful mental anchor: 1 billion seconds after the epoch was September 9, 2001. We passed 1.7 billion in November 2023 and will reach 2 billion in May 2033.
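You can reproduce the table above with a short loop, using the same Python conversion shown earlier:

```python
from datetime import datetime, timezone

# The timestamps from the reference table
for ts in (0, 1_000_000_000, 1_500_000_000, 1_708_646_400, 2_000_000_000):
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(f"{ts:>10}  {dt.strftime('%Y-%m-%d %H:%M:%S')} UTC")
```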
Seconds vs. Milliseconds: A Common Source of Bugs
One of the most frequent mistakes when working with timestamps is mixing up seconds and milliseconds. The Unix standard uses seconds, but JavaScript's Date object works in milliseconds, which means Date.now() returns a 13-digit number rather than the 10-digit number you would see in a server log or an API response that follows the Unix convention.
The fix is always the same: divide by 1000 to convert from milliseconds to seconds, or multiply by 1000 to go the other direction. Before you debug a timestamp conversion that is producing a date in 1970 or in the far future, check which unit you are working with.
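One defensive pattern is to normalize incoming values before converting them. The helper below is a hypothetical sketch, not a standard library function; it uses digit count as a heuristic (13-digit values are assumed to be milliseconds), which works for contemporary timestamps but would misclassify very old or far-future dates:

```python
def to_seconds(ts: int) -> int:
    """Normalize a timestamp to seconds.

    Heuristic: values of 10**12 or more (13+ digits) are
    assumed to be milliseconds and are divided by 1000.
    """
    return ts // 1000 if ts >= 10**12 else ts

print(to_seconds(1708646400))     # already seconds: 1708646400
print(to_seconds(1708646400000))  # milliseconds:    1708646400
```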
Negative Timestamps and Dates Before 1970
The Unix epoch does not represent the beginning of time — it is just an arbitrary reference point. Dates before January 1, 1970 are represented as negative timestamps. December 31, 1969 at 23:59:59 UTC is timestamp -1. Most modern languages handle negative timestamps correctly, though you will occasionally encounter legacy systems that store timestamps as unsigned integers and cannot represent pre-1970 dates at all.
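In Python, negative timestamps convert like any other value as long as you pass an explicit timezone (the aware path does pure arithmetic rather than calling the platform's localtime, which can reject pre-1970 values on some systems):

```python
from datetime import datetime, timezone

# Timestamp -1 is one second before the epoch
dt = datetime.fromtimestamp(-1, tz=timezone.utc)
print(dt.isoformat())  # 1969-12-31T23:59:59+00:00
```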
The Year 2038 Problem
Many older systems store Unix timestamps as a signed 32-bit integer. The maximum value a signed 32-bit integer can hold is 2,147,483,647, which corresponds to January 19, 2038, at 03:14:07 UTC. After that point, a 32-bit counter overflows and wraps around to a large negative number, which most systems would interpret as a date in 1901.
This is a real issue for embedded systems, older database schemas, and legacy applications that have not been updated to use 64-bit integers. A 64-bit signed integer can represent timestamps up to approximately 292 billion years in the future, so any system that has been modernized is not at risk.
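The overflow boundary can be demonstrated directly. The snippet below simulates the 32-bit wraparound with modular arithmetic; Python's own integers are arbitrary-precision, so the wrap is computed explicitly:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2,147,483,647

# The last second a signed 32-bit counter can represent
print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc).isoformat())
# 2038-01-19T03:14:07+00:00

# One second later, a 32-bit value wraps around to INT32_MIN
wrapped = (INT32_MAX + 1) - 2**32  # -2,147,483,648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).isoformat())
# 1901-12-13T20:45:52+00:00
```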
Where You Will Encounter Unix Timestamps in Practice
Timestamps appear constantly in backend development: API responses from services like GitHub, Stripe, and Twitter all return timestamps in Unix format. Log files on Linux systems record events as timestamps. Database schemas often store created_at and updated_at fields as integers rather than datetime strings for the storage and query performance benefits. JWT authentication tokens include an exp (expiration) field encoded as a Unix timestamp.
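The JWT case is a good illustration of why the integer representation is convenient: because the exp claim is a Unix timestamp, checking expiration is a single comparison. The payload below is an illustrative decoded token, not output from any real JWT library:

```python
import time

# A decoded JWT payload (illustrative values)
payload = {"sub": "user-123", "exp": 1708650000}

def is_expired(payload: dict, now: int) -> bool:
    # exp is a Unix timestamp, so validation is plain integer comparison
    return now >= payload["exp"]

print(is_expired(payload, now=1708646400))      # False: still valid
print(is_expired(payload, now=1708650001))      # True: past exp
print(is_expired(payload, now=int(time.time())))  # check against right now
```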
Once you know what to look for, you start seeing them everywhere — and converting them becomes second nature.
Frequently asked questions
What does a Unix timestamp actually count?
A Unix timestamp counts the number of seconds (or milliseconds in some systems) that have elapsed since January 1, 1970, at 00:00:00 UTC. Leap seconds are not counted, which means Unix time is technically not perfectly synchronized with atomic time, but the difference is negligible for most applications.
Why do some timestamps have 13 digits instead of 10?
A 10-digit timestamp represents seconds since the Unix epoch. A 13-digit timestamp represents milliseconds, which is what JavaScript's Date.now() returns. If you are getting unexpected results when converting, check whether the value is in seconds or milliseconds first.
What is the year 2038 problem?
Many older systems store Unix timestamps as a signed 32-bit integer, which can hold a maximum value of 2,147,483,647. That value corresponds to January 19, 2038, at 03:14:07 UTC. After that moment, 32-bit systems will overflow and may interpret the time as 1901. Modern systems using 64-bit integers will not be affected.
Can a Unix timestamp be negative?
Yes. A negative Unix timestamp represents a date before January 1, 1970. For example, -86400 represents December 31, 1969, at 00:00:00 UTC. Most modern programming languages handle negative timestamps correctly, though very old systems may not.
Is Unix time the same as UTC?
Unix time is based on UTC but is not identical to it because it does not account for leap seconds. In practice the difference is a matter of seconds accumulated over decades, which does not affect date-level calculations.
O. Kimani
Software Developer & Founder, FixTools
Building FixTools — a single destination for free, browser-based productivity tools. Every tool runs client-side: your files never leave your device.