Learn the difference between timestamp values in seconds and milliseconds so you can avoid bad conversions. The visible difference is small, but the result changes completely.

Seconds are the more compact format, common in APIs, logs, and systems that keep classic Unix timestamps. Milliseconds are common in JavaScript and systems that need finer time resolution. Reading one scale as the other shifts the date and breaks validations or comparisons.

The digit count is the simplest clue to suspect the format, and converting to a readable date confirms or rejects your assumption. Documentation or the source system often clarifies the format. If your data mixes both scales, normalize them early to avoid later mistakes.
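As an illustration, the same instant can be written in either scale; the value below is a hypothetical example, and JavaScript's Date constructor expects the millisecond form:

```javascript
// The same instant in both scales (hypothetical example value).
const seconds = 1700000000;     // 10 digits: classic Unix timestamp
const millis = seconds * 1000;  // 13 digits: millisecond timestamp

// JavaScript's Date constructor interprets its numeric argument as milliseconds.
console.log(new Date(millis).toISOString()); // "2023-11-14T22:13:20.000Z"
```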
When you read milliseconds as seconds, or the other way around, the resulting date can move by years or even centuries.
APIs, SDKs, and databases do not always document the format the same way, so it is worth confirming it before using it.
Finding the right scale saves you from chasing false timezone or serialization issues.
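To see how far the date can move, take a seconds value and read it on the wrong scale; the timestamp below is a hypothetical example:

```javascript
const ts = 1700000000; // seconds: a date in late 2023

// Seconds passed where milliseconds are expected: the date collapses toward 1970.
console.log(new Date(ts).toISOString()); // "1970-01-20T16:13:20.000Z"

// Milliseconds read as seconds (multiplied by 1000 once too often):
// the date jumps tens of millennia into the future.
console.log(new Date(ts * 1000 * 1000).getUTCFullYear()); // a year well beyond 50000
```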
As a rule of thumb, a 10-digit value often means seconds and a 13-digit value often means milliseconds.
If the output looks impossible, you are probably using the wrong scale.
Logs, SDKs, and APIs often have a specific convention worth checking.
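The digit-count rule can be turned into a quick check; guessScale below is a hypothetical helper, not a library function, and it only covers the common 10- and 13-digit cases:

```javascript
// Heuristic only: guess the scale of a Unix timestamp from its digit count.
function guessScale(ts) {
  const digits = String(Math.trunc(Math.abs(ts))).length;
  if (digits === 10) return "seconds";      // covers roughly 2001-2286
  if (digits === 13) return "milliseconds"; // same range, finer resolution
  return "unknown";                         // fall back to docs or a real conversion
}

console.log(guessScale(1700000000));    // "seconds"
console.log(guessScale(1700000000000)); // "milliseconds"
```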
Once you identify the format, avoid mixing both scales inside the same workflow.
JavaScript often works with milliseconds while other systems send seconds.
Some responses change format depending on the language or endpoint.
It also happens when comparing timestamps stored in different systems.
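One way to avoid mixing scales is to normalize everything to a single unit at the system boundary. toMillis below is a hypothetical sketch; it assumes every incoming timestamp refers to a date after September 2001, so any value under 1e12 can only be seconds:

```javascript
// Hypothetical normalizer: convert incoming timestamps to milliseconds.
// Assumes all inputs refer to dates after September 2001 (1e9 s / 1e12 ms),
// so a value below 1e12 must be a seconds timestamp.
function toMillis(ts) {
  return ts < 1e12 ? ts * 1000 : ts;
}

console.log(toMillis(1700000000));    // 1700000000000
console.log(toMillis(1700000000000)); // 1700000000000
```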
Does a 13-digit value always mean milliseconds? Almost always, but it is still best to validate with context and a real conversion. JavaScript, in particular, usually works with milliseconds in Date objects and related timestamps.

How do you spot a wrong scale? If the date looks far too old, too far in the future, or outside the expected context, check the scale. And do not jump straight to timezone debugging: first confirm the scale and only then check UTC versus local time.

When in doubt, paste the number into the converter and compare both formats before you continue working with the date.
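The side-by-side comparison the converter gives you can also be scripted; bothScales below is a hypothetical helper that renders the same number under each interpretation:

```javascript
// Hypothetical helper: show a timestamp under both interpretations.
function bothScales(ts) {
  return {
    asSeconds: new Date(ts * 1000).toISOString(), // treat input as seconds
    asMillis: new Date(ts).toISOString(),         // treat input as milliseconds
  };
}

console.log(bothScales(1700000000));
// { asSeconds: '2023-11-14T22:13:20.000Z', asMillis: '1970-01-20T16:13:20.000Z' }
```

Whichever result lands in the expected context is almost certainly the right scale.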